What is the format of the connection string to use in the ERwin tool when generating an ERD for an Oracle database? - oracle10g

I have been trying to generate an ERD for an Oracle database. When I do this via the 'Actions' -> 'Reverse Engineering' option, I get a section that asks me for a connection string, but I am unsure of the format for specifying the database and its details.
Could someone please help me with this?
Thanks
Pradeep

I am using ERwin 7.3.8 to connect to an Oracle 11g schema. The connection works for me when I use the Oracle tnsnames string format. For example:
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = **database-server**)(PORT = **database-server-port**))(CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = **service-name-if-exists**)))
Copy it as a single line and paste it into the Connection String field.
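For example, with made-up values filled in (host dbhost.example.com, the default listener port 1521, and service name orcl), the pasted one-line string would look something like this:
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))(CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl)))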
Cheers!

Related

How to use pyodbc to migrate tables from MS Access to Postgres?

I need to migrate tables from MS Access to Postgres. I'd like to use pyodbc to do this as it allows me to connect to the Access database using Python and query the data.
The problem is that I'm not exactly sure how to programmatically create a table with the same schema, other than by building a SQL statement with string formatting. pyodbc provides the ability to list all of the fields, field types and field lengths, so I can create a long SQL statement with all of the relevant information; however, how can I do this for a bunch of tables? Would I need to build SQL string statements for each table?
import pyodbc

access_conn_str = (r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)}; '
                   r'DBQ=C:\Users\bob\access_database.accdb;')
access_conn = pyodbc.connect(access_conn_str)
access_cursor = access_conn.cursor()

postgres_conn_str = ("DRIVER={PostgreSQL Unicode};"
                     "DATABASE=access_database;"
                     "UID=user;"
                     "PWD=password;"
                     "SERVER=localhost;"
                     "PORT=5433;")
postgres_conn = pyodbc.connect(postgres_conn_str)
postgres_cursor = postgres_conn.cursor()

# Collect column name, type and size for each table
table_dict = {}
row_dict = {}
for row in access_cursor.columns(table='table1'):
    row_dict[row.column_name] = [row.type_name, row.column_size]
table_dict['table1'] = row_dict

for table, values in table_dict.items():
    print(f"Creating table for {table}")
    access_cursor.execute(f'SELECT * FROM {table}')
    result = access_cursor.fetchall()
    postgres_cursor.execute(f'''CREATE TABLE {table} (Do I just put a bunch of string formatting in here?);''')
    postgres_cursor.executemany(f'INSERT INTO {table} (Do I just put a bunch of string formatting) VALUES (string formatting?)', result)
    postgres_conn.commit()
As you can see, with pyodbc I'm not exactly sure how to build the SQL statements. I know I could build a long string by hand, but if I were doing a bunch of different tables with different fields, etc., that would not be realistic. Is there a better, easier way to create the tables and insert rows based on the schema of the Access database?
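For anyone who wants to stay with plain pyodbc, here is a minimal sketch of the string-building approach. The type map is a made-up, incomplete mapping from Access ODBC type names to Postgres types, and the helper functions are illustrative only, so treat this as a starting point rather than a finished solution:

# Hypothetical, incomplete mapping from Access ODBC type names to Postgres types
TYPE_MAP = {
    'VARCHAR': 'varchar',
    'LONGCHAR': 'text',
    'INTEGER': 'integer',
    'DOUBLE': 'double precision',
    'DATETIME': 'timestamp',
    'BIT': 'boolean',
}

def build_create_table(access_cursor, table):
    # Build a CREATE TABLE statement from the Access column metadata
    cols = []
    for col in access_cursor.columns(table=table):
        pg_type = TYPE_MAP.get(col.type_name.upper(), 'text')  # fall back to text
        if pg_type == 'varchar' and col.column_size:
            pg_type = f'varchar({col.column_size})'
        cols.append(f'"{col.column_name}" {pg_type}')
    return f'CREATE TABLE "{table}" ({", ".join(cols)})'

def build_insert(access_cursor, table):
    # Build a parameterized INSERT; assumes cursor.columns() returns columns
    # in the same order as SELECT * does
    names = [col.column_name for col in access_cursor.columns(table=table)]
    col_list = ", ".join(f'"{n}"' for n in names)
    placeholders = ", ".join("?" for _ in names)
    return f'INSERT INTO "{table}" ({col_list}) VALUES ({placeholders})'

These helpers could then stand in for the two placeholder execute/executemany calls above, e.g. postgres_cursor.execute(build_create_table(access_cursor, table)) followed by postgres_cursor.executemany(build_insert(access_cursor, table), result).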
I ultimately ended up using a combination of pyodbc and pywin32. pywin32 is "basically a very thin wrapper of python that allows us to interact with COM objects and automate Windows applications with python" (quoted from second link below).
I was able to programmatically interact with Access and export the tables directly to Postgres with DoCmd.TransferDatabase
https://learn.microsoft.com/en-us/office/vba/api/access.docmd.transferdatabase
https://pbpython.com/windows-com.html
import win32com.client
import pyodbc
import logging
from pathlib import Path

# access_database_location, pg_user, pg_pwd and pg_port are placeholders to fill in
conn_str = (r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)}; '
            rf'DBQ={access_database_location};')
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

a = win32com.client.Dispatch("Access.Application")
a.OpenCurrentDatabase(access_database_location)

table_list = []
for table_info in cursor.tables(tableType='TABLE'):
    table_list.append(table_info.table_name)

for table in table_list:
    logging.info(f"Exporting: {table}")
    acExport = 1
    acTable = 0
    db_name = Path(access_database_location).stem.lower()
    a.DoCmd.TransferDatabase(acExport, "ODBC Database",
                             "ODBC;DRIVER={PostgreSQL Unicode};"
                             f"DATABASE={db_name};"
                             f"UID={pg_user};"
                             f"PWD={pg_pwd};"
                             "SERVER=localhost;"
                             f"PORT={pg_port};",
                             acTable, f"{table}", f"{table.lower()}_export_from_access")
    logging.info(f"Finished Export of Table: {table}")
    logging.info("Creating empty table in EGDB based off of this")
This approach seems to be working for me. I like how the creation of the table/fields as well as insertion of data is all handled automatically (which was the original problem I was having with pyodbc).
If anyone has better approaches I'm open to suggestions.

Can we read data from a CLOB or TEXT data type column in EnterpriseDB 9.2?

I am using EnterpriseDB 9.2 Advanced Server (an Oracle-compatible PostgreSQL fork) and I want to read data from a CLOB or TEXT data type column using the getClob() method.
I'm getting an error when trying this:
org.postgresql.util.PSQLException: Bad value for type long : adminuser#domainUser Logged In Sucessfully
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.toLong(AbstractJdbc2ResultSet.java:2971)
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getLong(AbstractJdbc2ResultSet.java:2163)
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getClob(AbstractJdbc2ResultSet.java:436)
So is it possible to read the data in the above-mentioned scenario using any technique in PostgreSQL?
If you are using Hibernate, you can define the entity field as shown below. The solution works well at least on PostgreSQL, H2 and HSQLDB (I didn't check other DBs):
@Column(columnDefinition = "CLOB")
String myClobField;

How to set the schema in ADO.NET after opening the connection

I am connecting to a DB2 database that contains several different schemas, and I want to connect to one particular schema only.
As far as I can tell, the schema cannot be given in the connection string; it has to be set after the connection is opened.
I have code that does this using classic ADO (ActiveX Data Objects), but I don't know how to do the same in ADO.NET.
Below is the code for the ADO connection:
db.Open DBcon_string
db.Execute ("SET SCHEMA=" & AppSchema)
db.Execute ("SET PATH=""SYSIBM"",""SYSFUN"",""SYSPROC"",""SYSIBMADM"",""" & AppSchema & """")
Note: db is an ADODB.Connection.
Replace AppSchema with 'ETWRMS'.
The link below may prove to be helpful:
http://msdn.microsoft.com/en-us/library/ms971481.aspx
Also refer to the following link, "Using an ADO.NET Entity Framework Data Provider":
http://www.datadirect.com/download/eval_docs/dotnet_win_quickstart.htm
A similar kind of issue that may help:
How to see the schema of a db2 table (file)
It would be similar in ADO.NET. Assuming you're using an OleDbConnection, create and open it, then create an OleDbCommand on that connection, and use the command's ExecuteNonQuery method to issue the same statements you did with the old db.Execute method.
Please try something like this (using an OleDbCommand on the open connection, as described above):
Dim cmd As New OleDbCommand("SET SCHEMA=" & AppSchema, db)
cmd.ExecuteNonQuery()
cmd.CommandText = "SET PATH=""SYSIBM"",""SYSFUN"",""SYSPROC"",""SYSIBMADM"",""" & AppSchema & """"
cmd.ExecuteNonQuery()
Here db is an open OleDbConnection.

Running Jasper Reports against an in-memory H2 datasource?

I'm trying to run Jasper reports against both a live and a reporting database, but any reports run against the live database throw exceptions about not finding the right tables (although the default PUBLIC schema is found). It looks like the main DataSource connection isn't honoring the H2 connection settings, which specify IGNORECASE=true, as the generated columns and tables are capitalized but my queries are not.
DataSource.groovy dataSource:
dataSource {
    hibernate {
        cache.use_second_level_cache = false
        cache.use_query_cache = false
    }
    dbCreate = "create-drop" // one of 'create', 'create-drop', 'update'
    pooled = true
    driverClassName = "org.h2.Driver"
    username = "sa"
    password = ""
    url = "jdbc:h2:mem:testDb;MODE=PostgreSQL;IGNORECASE=TRUE;DATABASE_TO_UPPER=false"
    jndiName = null
    dialect = null
}
Datasources.groovy dataSource:
datasource(name: 'reporting') {
    environments(['development', 'test'])
    domainClasses([SomeClass])
    readOnly(false)
    driverClassName('org.h2.Driver')
    url('jdbc:h2:mem:testReportingDb;MODE=PostgreSQL;IGNORECASE=TRUE;DATABASE_TO_UPPER=false')
    username('sa')
    password('')
    dbCreate('create-drop')
    logSql(false)
    dialect(null)
    pooled(true)
    hibernate {
        cache {
            use_second_level_cache(false)
            use_query_cache(false)
        }
    }
}
What fails:
JasperPrint print = JasperFillManager.fillReport(compiledReport, params,dataSource.getConnection())
While debugging, the only difference I've found is that the live dataSource, when injected or looked up with DatasourcesUtils.getDataSource(null), is a TransactionAwareDatasourceProxy, while DatasourcesUtils.getDataSource('reporting') is a BasicDataSource.
What do I need to do for Jasper to operate on the active in-memory H2 database?
This failure is not reproducible against a real Postgres database.
Probably you are opening a different database. Using the database URL jdbc:h2:mem:testDb will open an in-memory database within the same process and class loader.
Have you already tried using a regular persistent database, with the database URL jdbc:h2:~/testDb?
To open an in-memory database that is running in a different process or class loader, you need to use the server mode. That means you need to start a server where the database is running, and connect to it using jdbc:h2:tcp://localhost/mem:testDb.
See also the database URL overview.
H2 doesn't currently support case-insensitive identifiers (table names, column names). I know other databases support it, but currently H2 uses regular java.util.HashMap<String, ..> for metadata, and that's case sensitive (whether or not IGNORECASE is used).
In this case, the identifier names are case-sensitive. I tried with the database URL jdbc:h2:mem:testReportingDb;MODE=PostgreSQL;IGNORECASE=TRUE;DATABASE_TO_UPPER=false using the H2 Console:
DROP TABLE IF EXISTS UPPER;
DROP TABLE IF EXISTS lower;
CREATE TABLE UPPER(NAME VARCHAR(255));
CREATE TABLE lower(name VARCHAR(255));
-- ok:
SELECT * FROM UPPER;
SELECT * FROM lower;
-- fail (table not found):
SELECT * FROM upper;
SELECT * FROM LOWER;
So, the question is: when creating the tables, were they created with uppercase identifiers or a different database URL? Is it possible to change that? If not: is it possible to use a different database URL?
Just don't run reports against in-memory datasources, and this won't be an issue.

How to connect Excel to MS SQL and get data WITH column names?

One of my users wants to get data into Excel from a SQL 2008 query/stored proc.
I have never actually done it before.
I tried a sample using ADO and got data, but the user reasonably asked: where are the column names?
How do I connect a spreadsheet to an SQL resultset and get it with column names?
Apparently the field names are already in the recordset object; I just needed to pull them out.
' rs is the ADO Recordset returned by the query
i = 1
For Each objField In rs.Fields
    Sheet1.Cells(1, i) = objField.Name
    i = i + 1
Next objField
I don't know which version of Excel you are using, but in Excel 2007 you can just connect to the SQL DB by going to Data -> From Other Sources -> From SQL Server. After you select your server and database, your connection will be created. Then you can edit it (Data -> Connections -> Properties): in the Definition tab, change the Command type to SQL and enter your query in the Command text box. You can also create a view on the server and just point to that from Excel.
This should do it unless I misunderstood your question.