I installed Orange and I have data in a local PGSQL server.
PGSQL listens on the default port which is 5432.
I have the psycopg2 library installed, and I also wrote a short Python script that pulls some data from the database to check that the module is installed correctly.
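For reference, the check script was along these lines (a minimal sketch; the host, credentials, database name, and query are placeholders):

import psycopg2

# Placeholder connection details; adjust to your local server.
conn = psycopg2.connect(host="localhost", port=5432, dbname="mydb",
                        user="myuser", password="mypassword")
cur = conn.cursor()
cur.execute("SELECT 1")  # trivial query just to prove the round trip works
print(cur.fetchone())
cur.close()
conn.close()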
Firewall is down.
The Python environment path is set to use 3.4.4, which is what Orange3 uses.
When I add an SQL Table widget, I get an error saying "please install a backend to use this widget".
The documentation on the Orange site says that all that needs to be done for the DB integration is installing the Python module, but this doesn't work for me.
Help would be appreciated.
Links:
https://docs.orange.biolab.si/3/visual-programming/widgets/data/sqltable.html
I need to poll a DB2 database table until a record is created, and once that succeeds, I need to trigger a task in a DAG.
For this I have to do the following:
Create a DB2 connection in Airflow.
Use an SQL sensor to poll the DB table using the connection created in the step above.
But I don't see a connection type option for DB2 (screenshot attached). Do I have to install something? Please help.
PS: I am new to Airflow.
There is no DB2-specific connection.
For general connections you can just use the Generic connection type:
For older Airflow versions that don't have the Generic connection type, you can use any other connection type (HTTP or MySQL, for example). It doesn't really matter: Airflow looks up connections by connection id, and the type is almost meaningless in that respect.
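To illustrate the polling step from the question, here is a minimal DAG sketch using SqlSensor with such a connection (the connection id my_db2_conn and the schema/table names are placeholders; the sensor keeps poking until the query's first cell is truthy):

from datetime import datetime

from airflow import DAG
from airflow.sensors.sql import SqlSensor

with DAG(dag_id="db2_poll", start_date=datetime(2022, 1, 1),
         schedule_interval=None, catchup=False) as dag:
    wait_for_record = SqlSensor(
        task_id="wait_for_record",
        conn_id="my_db2_conn",  # the connection created in the Airflow UI
        sql="SELECT COUNT(*) FROM MY_SCHEMA.MY_TABLE",  # placeholder table
        poke_interval=60,  # re-check once a minute
    )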
Yes, Airflow does not have a DB2-specific connection, so you have to choose Generic as the connection type.
I also recommend installing the following package for the connection's security mechanism:
pip install airflow-provider-db2
I have successfully connected to DB2 this way on Airflow 2.2.5.
Airflow doesn't have a specific provider for DB2, so you need to use JDBC to connect.
This means installing some Airflow libraries and downloading the DB2 JDBC driver.
The libraries and provider you need to install are:
apache-airflow-providers-jdbc (version 2.1.3)
JayDeBeApi (latest version)
JPype1 (latest version)
You also need to download the DB2 JDBC driver. The driver bundle includes several files and a license file, such as:
db2jcc4.jar
sqlj4.zip
jdbc4_LI_en
Because the connection goes through JDBC, OpenJDK also needs to be installed in your Docker image.
You also need to set the JAVA_HOME, PATH, and CLASSPATH environment variables.
All of the above can be done in your Dockerfile.
Once the image is built, you can start Airflow and set up the connection.
Connection URL syntax: jdbc:db2://Host:Port/Database
Replace Host, Port, and Database with your own values.
Driver Path: enter the location of the driver class file (db2jcc4.jar).
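As a quick sanity check of the driver outside Airflow, a JayDeBeApi sketch along these lines should work (the host, port, credentials, and jar path are placeholders):

import jaydebeapi

# Placeholder URL and credentials; point the last argument at your db2jcc4.jar.
conn = jaydebeapi.connect(
    "com.ibm.db2.jcc.DB2Driver",
    "jdbc:db2://myhost:50000/MYDB",
    ["db2user", "db2password"],
    "/opt/drivers/db2jcc4.jar",
)
cur = conn.cursor()
cur.execute("SELECT 1 FROM SYSIBM.SYSDUMMY1")  # DB2's built-in dummy table
print(cur.fetchone())
conn.close()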
Hope these tips help. I am new to Airflow too.
I've installed Bareos 20.0.1 on Ubuntu 20.04.3 according to their documentation here.
I'm trying to back up a remote PostgreSQL database. Apparently there are three possible scenarios, and the pros of the PostgreSQL Plugin (the third solution) make it the obvious choice.
Following the PostgreSQL Plugin documentation, in the Prerequisites for the PostgreSQL Plugin section, there is a line saying:
The plugin must be installed on the same host where the PostgreSQL database runs.
Now what I'm failing to understand is this: if I'm supposed to install the plugin on my database node, how will the Bareos machine and the plugin on the DB machine communicate?
Furthermore, I've checked the source code for this module on their GitHub, and the plugin does try to find files locally, which supports the statement above.
In a desperate act, I tried installing the plugin and its dependencies on the Bareos node, but I keep getting the error Error: python3-fd-mod: Could not read Label File /var/lib/postgresql/13/main/backup_label, which means it is looking for the backup_label file on the Bareos node.
Here is the configuration for my fileset:
FileSet {
  Name = "psql"
  Include {
    Options {
      compression = GZIP
      signature = MD5
    }
    Plugin = "python"
             ":module_path=/usr/lib/bareos/plugins"
             ":module_name=bareos-fd-postgres"
             ":postgresDataDir=/var/lib/postgresql/13/main"
             ":walArchive=/var/lib/postgresql/13/wal_archive/"
             ":dbHost=DATABASE_DNS"
             ":dbuser=DATABASE_USER"
  }
}
Note that the plugin documentation describes the dbHost parameter as:
useful, if socket is not in default location. Specify socket-directory with a leading / here
However, since I'm targeting a remote database, I'm using its DNS address instead. I verified that Bareos can connect to the database, and made sure the backup_label file gets created while the PostgreSQL backup job runs.
I'll be happy to provide more details if necessary. Appreciate any help or even guesses :-D
The Goal
I need to get data from a MongoDB updated every 15 minutes to use to build into a PowerBI report.
The Gear
I am connected from my Windows machine via SSH to an RHEL server (server a). This server is running the Power BI connector (mongosqld), which is connected to my MongoDB instance running on a different server (server b). I'm also running MySQL on server b. My Power BI connector is installed on server b.
Exactly where I'm at
I am following the steps listed here (and all the associated pages) and have tried everything listed short of writing a config file; since things are working on mongosqld's end, I don't think I need one, and if I can't get it working manually, a config file won't exactly help.
https://docs.mongodb.com/bi-connector/current/connect/powerbi/
Using:
mongosqld --mongo-uri="mongodb://10.xxx.xxx.xx" --auth --mongo-username="ThisGuy" --mongo-password="test"
I successfully map the schema and see an active connection in the command window. I can also access my database from Compass using an authorization-enabled URL.
When I set up an ODBC connector, I use the IP of server a, the user and password from my URL, and port 3307. Nothing shows up in the dropdown, and when I click 'test' I get the following message:
Connection Failed
[MongoDB][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10060)
I have also tried ports 3306, 27017, and 27015. Just to be safe, I also added firewall rules for all traffic on these ports. I've tried this many times, including (just for the hell of it, as I'm kind of new to this stuff) the IP of server b, the IP of my own machine, and the credentials for MySQL, basically any combination of these things I can think of.
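To take the GUI out of the equation, the same DSN settings can be tested from Python with pyodbc (a sketch; the driver name must match whatever is registered in your ODBC Data Source Administrator, and the host, port, and credentials are placeholders):

import pyodbc

# Placeholder driver name and connection details.
conn_str = (
    "DRIVER={MongoDB ODBC 1.4(w) Driver};"
    "SERVER=10.xxx.xxx.xx;"
    "PORT=3307;"
    "UID=ThisGuy;"
    "PWD=test;"
)
conn = pyodbc.connect(conn_str)
print(conn.getinfo(pyodbc.SQL_DBMS_NAME))  # report what we connected to
conn.close()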
In Power BI, my ODBC driver shows up, and when selected in the dropdown it asks for a username and password. I have tried both the Mongo and the MySQL credentials; I'm not sure which I should be using.
Regardless, I get the following error inside Power BI:
Details: "ODBC: ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061)
ERROR [HY000] [MySQL][ODBC 1.4(w) Driver]Can't connect to MySQL server on '10.xxx.xxx.xxx' (10061)"
Thoughts
I don't control either server. Although I have root access, being new to this tech and this company, I am wary of breaking anything that a co-worker would have to fix. I read in a different SO thread that downgrading the version of MySQL running on the server might fix the problem, but I don't think it will actually help, and I'm afraid I might break something else on the server if I do it:
The C Authentication plugin was developed against MySQL 5.7.18 Community Edition (64-bit), and tested with MySQL 5.7.18 Community Edition and the latest version of MongoDB Connector for BI. The plugin is not compatible with MySQL Server or Connector/ODBC driver version 8 and later.
https://dba.stackexchange.com/questions/219550/access-denied-when-connecting-to-mongosqld-with-mysql
Maybe the problem is that server b is listening to server a on port 3307, and there is another, unmentioned port that my ODBC driver should be pointing at? I'm not sure how to test for this when you're a step removed like this.
So that's it. I'm really stuck and would love some help. I am going to try the downgrade tomorrow if nothing else shakes loose, and will keep this thread updated.
Thank you for reading
Good day,
Currently I use MS Access at home for several databases (for personal use).
At work I use PostgreSQL, which is infinitely better. I want to start using Postgres for my personal databases, but I don't know where to start.
I've tried reading the documentation, but I still don't know how to start. I don't have a server at home; is it possible to just make a local database/tablespace, or would I have to host a virtual server?
Note that I am willing to use other open-source databases if there is an easy option out there; MS Access is just so... terrible.
Thanks,
So, it seems you have Windows at home. You just need to download the full installer for PostgreSQL:
http://www.postgresql.org/download/windows/
After installation, it will automatically register the Postgres server as a service on the local machine. That means the server will always run in the background, but you can disable that later or just uninstall it.
After that, you can use pgAdmin (included in the default installation package) or other client tools to access the DB engine.
UPD: in pgAdmin, create a connection with these settings:
'localhost' as the hostname;
port: 5432;
user and database: postgres (for testing purposes only; you should create your own user and tables with restricted rights later).
The password for postgres (the DB admin user) is the one entered during the installation process.
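Following the advice above about creating your own restricted user, here is a sketch of doing that from Python (assuming psycopg2 is installed; the role name, database name, and passwords are placeholders):

import psycopg2

# Connect as the admin user to create a restricted role and a database.
conn = psycopg2.connect(host="localhost", port=5432, dbname="postgres",
                        user="postgres", password="ADMIN_PASSWORD")
conn.autocommit = True  # CREATE DATABASE cannot run inside a transaction
cur = conn.cursor()
cur.execute("CREATE ROLE home_user LOGIN PASSWORD 'USER_PASSWORD'")
cur.execute("CREATE DATABASE home_db OWNER home_user")
cur.close()
conn.close()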
Server settings are stored somewhere here:
"C:\Program Files\PostgreSQL\9.3\data"
pg_hba.conf - Client Authentication Configuration File
postgresql.conf - Configuration File
I am building a Django site on Google Compute Engine, and I want to put my database in Cloud SQL. Is it possible?
What is the most common way to do this: installing MySQL on the virtual machine, or using a Cloud SQL instance?
Thank you.
You can use either Google Cloud SQL or manage your own SQL database, depending on your needs.
To use Cloud SQL, you'd want to follow the instructions here: https://developers.google.com/cloud-sql/docs/external
If you want to manage your own SQL database, you can install MySQL or some other database on an instance. Depending on your needs, you can start with a g1-small with a fairly large disk attached and then later use a larger instance type to run your database.
If you're running your own database, you'll need to make sure to take regular backups and copy them off the database machine, to someplace like Google Cloud Storage. If you're using Cloud SQL, you can use the console or the API to schedule database backups.
This answer is following up from "Well, the problem is that to use Cloud SQL, I must connect using JDBC. I'm using Python. How I can do?"
I am not from the Python world, but I recently connected my Java app on a GCE instance to a Cloud SQL DB (via the cloud-sql-proxy approach, as described here: https://cloud.google.com/sql/docs/compute-engine-access) and I see no reason why it shouldn't work for Python too.
Here is what I just tried, and it easily connected my test Python app to a Cloud SQL DB via the cloud-sql-proxy:
Step 1: Download and run the proxy on a local port, like below (this establishes a channel between the local port 3306 and the Cloud-SQL database instance identified by the connection name "PROJ_NAME:TIMEZONE:SQL_NAME"):
sudo wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
sudo mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
sudo chmod +x cloud_sql_proxy
sudo ./cloud_sql_proxy -instances=PROJ_NAME:TIMEZONE:SQL_NAME=tcp:3306 &
Step 2: Make sure that python-mysqldb is installed
sudo apt-get install python-mysqldb
Step 3: Run the following test program to connect to the Cloud SQL DB via local port 3306, set up by the proxy:
import MySQLdb

# Connect through the local proxy endpoint set up in Step 1.
conn = MySQLdb.connect(host="127.0.0.1", port=3306, user="root",
                       passwd="my_root_password", db="my_db")
x = conn.cursor()
try:
    x.execute("""INSERT INTO Test(test_id) VALUES ('111')""")
    conn.commit()
except:
    conn.rollback()
conn.close()
Hope it helps.