I am using Talend 6.0.1 on two different Windows machines with similar configurations.
Machine 1 is a Windows server.
Machine 2 is on Google Cloud.
Talend on both machines is connected to MS SQL Server.
When I use Guess Schema, Machine 1 shows the data type as Double,
while for the same fields Guess Schema on Machine 2 shows the data type as BigDecimal.
I have checked that the configuration on both machines is similar.
Can anyone help me find the reason?
Regards,
V
I have an AWS RDS (PostgreSQL) instance that is inside a private network, accessible only via a VPN and a bastion host.
I am able to establish a connection from Power BI Desktop to the PostgreSQL RDS instance by creating an SSH tunnel from my laptop (localhost) to the bastion host and using the ODBC driver. With this approach all the data is imported into Power BI Desktop (Import mode).
But our requirement is to connect through DirectQuery so that the data refreshes in real time and the reports are generated dynamically, which I am not able to do.
I entered the database credentials into Power BI Desktop, and it is not working correctly; I get a timeout error.
I must use DirectQuery; I can't use Import.
Any help is appreciated.
The exact error you are getting would help get to the root cause of the issue. However, a few basic troubleshooting steps that I'd suggest are:
Ensure that you have a compatible version of the Npgsql provider installed on your machine, such as Npgsql 4.0.9. At times the latest version of the software causes issues.
Ensure that you remove the semicolon at the end of the query.
Once you get the query running successfully in the desktop version and publish the report to the web version, the visuals will not be able to connect to the database unless an on-premises data gateway is set up. More details on setting up a data gateway to automatically refresh the dataset for the Power BI web version are here:
Refresh AWS RDS database from Power BI Web (once you are successfully able to query directly).
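Before digging further into Power BI itself, it can also help to confirm that the SSH tunnel endpoint accepts database connections at all. A quick check from the same laptop, sketched in Python with psycopg2 (the forwarded port, database name, and credentials below are placeholders):

import psycopg2

try:
    # Connect to the locally forwarded tunnel endpoint, not to RDS directly.
    conn = psycopg2.connect(
        host="localhost",     # SSH tunnel entry point on the laptop
        port=5432,            # local port forwarded to the RDS instance
        dbname="mydb",
        user="myuser",
        password="mypassword",
        connect_timeout=10,   # fail fast instead of hanging like the report does
    )
    print("Tunnel and database are reachable")
    conn.close()
except psycopg2.OperationalError as err:
    print("Connection failed:", err)

If this script also times out, the problem lies in the tunnel or the security-group rules rather than in Power BI.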
I am new to the forum and also new to PostgreSQL.
Normally I use MySQL for my projects, but I've decided to start migrating towards PostgreSQL for some valid reasons which I found in this database.
Expanding on the problem:
I need to analyze data via some mathematical formulas, but in order to do this I need to get the data from the software via its API.
The software, the API, and PostgreSQL v11.4, which I installed on a desktop, are all running on Windows. So far I've managed to take the data via the API and import it into PostgreSQL.
My problem is how to transfer this data from the local PostgreSQL (on the PC) to a web PostgreSQL (installed on a web server) which is running Linux.
For example, if I take the data every five minutes from the software via the API and put it in the local PostgreSQL database, how can I transfer this data (automatically if possible) to the database on the web server running Linux? I rejected a data dump because importing the whole database every time is not viable.
What I would like is to transfer only the five-minute data which gradually adds to the previous data.
I also rejected the idea of a master-slave architecture because I don't know the total amount of data: the web server has almost 2 TB of disk, while the local PC has only one hard disk that serves only to take the data and then send it to the web server for analysis.
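To make the requirement concrete, below is the kind of incremental push I have in mind, sketched in Python with psycopg2 (the table name, column names, and connection strings are invented for the example):

import psycopg2

# Connections to the local Windows database and the remote Linux one.
local = psycopg2.connect("host=localhost dbname=apidata user=postgres")
remote = psycopg2.connect("host=web-server.example.com dbname=apidata user=analyst")

with local, remote:
    with local.cursor() as lcur, remote.cursor() as rcur:
        # Find the newest row already present on the web server.
        rcur.execute("SELECT COALESCE(MAX(inserted_at), 'epoch') FROM readings")
        last_synced = rcur.fetchone()[0]

        # Pull only the rows added locally since the last sync.
        lcur.execute(
            "SELECT inserted_at, value FROM readings"
            " WHERE inserted_at > %s ORDER BY inserted_at",
            (last_synced,),
        )

        # Push the five-minute delta to the web server.
        rcur.executemany(
            "INSERT INTO readings (inserted_at, value) VALUES (%s, %s)",
            lcur.fetchall(),
        )

Something like this could run from the Windows Task Scheduler every five minutes, but I'd like to know if there is a better, more standard way.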
Could someone please help by giving some good advice regarding how to achieve this objective?
Thanks to all for any answers.
Environment:
IBM Worklight 6.2,
IBM Liberty 8.5.5.1,
IBM DB2 10.5, and
Windows 2008 Standard Edition.
For high availability of the DB instance [WLDBINST], I have followed the architecture below:
two Windows clustered machines with the IBM DB2 binaries installed, with SAN storage used to share the database files in common.
If one node is not available, the other node takes over control without any loss of data.
I have tested the DB2 instance via the cluster IP and it works fine.
The error below was logged when I ran the Worklight Server Configuration Tool:
Instance WLDBINST not found on server. Found only [WLDBINST C, :, DB2CLUSTER, DB2]
I have found the reason for the above issue. To list the DB2 instances we can use the command db2ilist:
C:\>db2ilist
WLDBINST C : DB2CLUSTER
DB2
The result above shows that we have two instances:
WLDBINST, which is on the C drive and part of DB2CLUSTER, and
DB2.
The Worklight Configuration Tool also uses a similar DB2 tool to list the instances, I guess.
So the configuration tool is treating the result as four instances, as follows (see the sketch after this list):
WLDBINST C,
:,
DB2CLUSTER, and
DB2.
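To illustrate what I mean, parsing the db2ilist output line by line would yield only the two real instance names. Here is a small Python sketch of the parsing I would expect (how the configuration tool actually tokenizes the output is my assumption):

# Raw output of db2ilist, as shown above.
output = """WLDBINST C : DB2CLUSTER
DB2"""

# One instance per line; the instance name is the first field of the line.
instances = [line.split()[0] for line in output.splitlines() if line.strip()]
print(instances)  # ['WLDBINST', 'DB2']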
How can I resolve this issue?
If the Server Configuration Tool is not able to create the database for your topology, you should create it manually before running the tool.
For the Administration database, the doc is here:
https://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.installconfig.doc/admin/t_creating_the_db2_database_for_wladmin.html
For the Project Runtime databases, the doc is here:
https://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.deploy.doc/admin/t_creating_the_db2_databases.html
The Server Configuration Tool will not do any specific configuration to ensure that Liberty reopens a connection if there is a database node switch. I recommend that you review the behavior of Liberty in this case and add settings in server.xml as required; one example is sketched below.
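For example, one Liberty setting worth reviewing is the connection pool's purge policy, so that all pooled connections are discarded once one of them is found dead after a node switch. A sketch of the relevant server.xml fragment (the JNDI name, database name, and cluster address are placeholders for your topology):

<dataSource jndiName="jdbc/WorklightDS">
    <jdbcDriver libraryRef="DB2JCCLib"/>
    <properties.db2.jcc databaseName="WRKLGHT" serverName="db2-cluster-ip" portNumber="50000"/>
    <!-- Purge the whole pool when a stale connection is detected,
         e.g. after the DB2 cluster fails over to the other node. -->
    <connectionManager purgePolicy="EntirePool"/>
</dataSource>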
I am facing a problem with the MongoDB connection.
I have successfully imported the tMongo components into my Talend Open Studio 5.1.1, and by copying the mongo-1.3.jar file to the lib/java folder, my MongoDB jobs run successfully. But the problem is that even if I provide a fake server path (IP) and a fake port for MongoDB, my job runs without an error and gives me one row with no data; the same goes for the right IP and port.
How do I resolve it?
I think the connection is not working. As you may know, MongoDB only checks that the connection is actually working when you perform a query on it.
(It doesn't check for a successful connection when you just connect to it.)
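You can see the same lazy behavior outside Talend. For example, with the Python driver (pymongo), purely as a demonstration and not what the Talend component runs, constructing the client against a fake host succeeds, and the error only surfaces when an operation forces a round trip:

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Constructing the client never touches the network, so a fake host "works".
client = MongoClient("mongodb://10.0.0.99:27017/", serverSelectionTimeoutMS=2000)

try:
    # Only an actual command makes the driver talk to the server.
    client.admin.command("ping")
except ServerSelectionTimeoutError:
    print("The connection was never really established")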
I would suggest instead adding the MongoDB components provided in Talend for Big Data, by following the steps below.
The components provided for MongoDB are:
tMongoDBInput, tMongoDBOutput, tMongoDBConnection, etc.
Alternatively, you can download the components from http://www.talendforge.org/exchange/ and search for Mongo instead of using Talend for Big Data, but I would suggest using Talend for Big Data for this.
The components come in zipped format; unzip them. In Talend for Big Data you will find the components in the components folder.
Copy these unzipped components to the installation path of TOS:
C:\Talend\TOS_DI-Win32-r84309-V5.1.1\plugins\org.talend.designer.components.localprovider_5.1.1.r84309\components
Copy the mongo-1.3.jar file in the components folder to C:\Talend\TOS_DI-Win32-r84309-V5.1.1\lib\java
On many systems you might not be able to see this file; in that case proceed with ADMINISTRATOR privileges.
(Optional, needed only on a few systems) Add the entry inside index.xml.
Save index.xml.
Restart TOS.
Then you will be able to use them as normal components.
Cheers!
The reason for the job running without any error could be the connection / metadata you have used for the Mongo connector. It should not be possible for the job to run without any error even after giving a fake path.
I guess you might have configured (re-modified) the repository connection but are using built-in metadata for the component.
TL;DR: How can I use PowerCLI to determine if EMC PowerPath is installed on an ESX host?
I am attempting to write a script that will perform a host-masking operation when moving a LUN from one storage group to another. This is to accommodate the All Paths Down error that can occur due to a race condition in ESX 4.1. The steps are described in VMware KB 1015084 and 1009449. These steps are written for use from the service console. I want to avoid scripting SSH activity and instead do the entire thing in PowerShell/PowerCLI.
In our environment, we are using EMC PowerPath on most - but not all - of our hosts. This LUN masking only needs to be performed on hosts where PowerPath is installed, so I am attempting to test each host to determine this.
I have been pulling my hair out trying to determine how to do this with PowerCLI. If connected to the ESX service console, the command esxcfg-mpath --list-plugins will show whether PowerPath is installed. In the vCenter GUI, it can be determined by:
Select Host -> Configuration -> Storage Adapters -> Select Adapter -> View Devices -> Examine "Owner" column
Using Get-ScsiLun in PowerCLI returns an object that contains all of this information except the Owner column.
I am stumped. I had hoped that a Get-EsxCli object would have some kind of equivalent method, maybe in satp or nmp, but so far I can't find anything.
As suggested, I'll answer my own question:
The answer is: $esxcli.corestorage.plugin.list() will return a list of the plugins installed on the host.
To get this information from PowerCLI 6.5, you can use the following:
(Get-ESXCLI -VMHost <host>).Storage.Core.Plugin.List()
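For example, to check a host for PowerPath you can filter that list (the PluginName property is my assumption based on the esxcli field names; inspect one returned object to confirm it):

(Get-ESXCLI -VMHost <host>).Storage.Core.Plugin.List() |
    Where-Object { $_.PluginName -match "PowerPath" }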