LoopBack (DB2) - Cannot create an instance of PersistedModel that uses a schema other than the userid - db2

I am trying to define a model based on PersistedModel to access a table in DB2, call it MY_SCHEMA.MY_TABLE.
I created the model MY_TABLE, based on PersistedModel, with a data source (datasources.json) whose definition includes the attribute "schema": "MY_SCHEMA". The data source also contains the userid my_userid, which is used for the connection.
Current Behavior
When I try to call the API for this model, it tries to access the table my_userid.MY_TABLE.
Expected Behavior
It should access MY_SCHEMA.MY_TABLE.
The DB2 instance happens to be on a System z. I have created a table called my_userid.MY_TABLE, and that works; however, the solution we are trying to build requires multiple schemas.
Note that this only appears to be an issue with DB2 on System z. I can change schemas on DB2 LUW.

What LoopBack connector are you using? What version? Can you also check what version of loopback-ibmdb is installed in your node_modules folder?
AFAICT, LoopBack's DB2-related connectors support the schema field; see https://github.com/strongloop/loopback-ibmdb/blob/master/lib/ibmdb.js#L96-L100
self.schema = this.username;
if (settings.schema) {
  self.schema = settings.schema.toUpperCase();
}
self.connStr += ';CurrentSchema=' + self.schema;
Have you considered configuring the database connection using a DSN instead of individual fields like hostname and username?
In your datasource config JSON:
"dsn": "DATABASE={dbname};HOSTNAME={hostname};UID={username};PWD={password};CurrentSchema=MY_SCHEMA"

Related

MATLAB error when connecting to PostgreSQL database

I'm trying to connect to a PostgreSQL database with the following command:
connection = database( ...
options.getDatabaseName(), ...
options.getUsername(), ...
options.getPassword(), ...
"org.postgresql.Driver", ...
"jdbc:postgresql://" + options.getHostname() + ":" + options.getPort() + "/" + options.getDatabaseName() ...
);
It returns the following error:
Error using database (line 59)
Unmatched parameter name 'org.postgresql.Driver' must be a string scalar or character vector that can represent a field name.
I've seen other questions about this, like this one, but the error message is different.
What am I doing wrong?
I've found the solution myself, and it's a tricky one (in my opinion it may be related to a bug).
To test the database connection, I first created a connection with the Database Explorer. It worked, and I saved this connection using the same name as the database.
By inspecting the source code of the database command, I saw that the first thing it does is check whether an existing data source has that name; only if there isn't one does it look for the database. The problem was that, since my saved connection had the same name as the database, database assumed I wanted the data source version of the command instead of the database one. It tried to use this signature:
conn = database(datasource,username,password)
instead of this one:
conn = database(databasename,username,password,driver,url)
since wtrade is the name of both the database and the data source. In that case the fourth argument, driver, must be a parameter name such as "Vendor" or "PortNumber", as per the MATLAB documentation; because the driver string does not match any parameter name, I got the error.
I removed the data source with the same name as the database and everything started to work.
I've reported this to MathWorks, since in my opinion there should be no problem when a database has the same name as a data source: the signatures are different, so the database command should handle this case as well.
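To illustrate, here is a sketch with placeholder credentials ("Vendor" and "PortNumber" are the parameter names from the MATLAB documentation mentioned above): once the conflicting data source is removed, the positional JDBC call resolves correctly, and the name-value form avoids the ambiguity entirely.
% Positional JDBC syntax: only resolves correctly when no saved data source
% shares the database name "wtrade" (credentials below are placeholders).
conn = database("wtrade", "myuser", "mypassword", ...
    "org.postgresql.Driver", ...
    "jdbc:postgresql://localhost:5432/wtrade");

% Name-value syntax: the fourth argument is a parameter name ("Vendor"),
% so it cannot be mistaken for the data-source form of the call.
conn = database("wtrade", "myuser", "mypassword", ...
    "Vendor", "PostgreSQL", ...
    "Server", "localhost", "PortNumber", 5432);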

How to get the servername\hostname in Firebird 2.5.x

I use Firebird 2.5, and I want to retrieve the following values
Username:
I used SELECT rdb$get_context('SYSTEM', 'CURRENT_USER') FROM ...
Database name:
I used SELECT rdb$get_context('SYSTEM', 'DB_NAME') FROM ...
But for the server name I did not find any client API. Do you know how I can retrieve the server name with a SELECT statement?
There is nothing built into Firebird to obtain the server host name (or host names, as a server can have multiple host names) through SQL.
The closest you can get is by requesting the isc_info_firebird_version information item through the isc_database_info API function. This returns version information that - if connecting through TCP/IP - includes a host name of the server.
But as your primary reason for this is to distinguish between environments in SQL, it might be better to look for a different solution. Some ideas:
Use an external table
You can create an external table to contain the environment information you need
In this example I just put in a short, fixed-width name for the environment types, but you could include other information; just be aware that the external table format is a binary format. When using CHAR it will look like a fixed-width text format, where values shorter than declared need to be padded with spaces.
You can follow these steps:
Configure ExternalFileAccess in firebird.conf (for this example, you'd need to set ExternalFileAccess = Restrict D:\data\DB\exttables).
You can then create a table as
create table environment
  external file 'D:\data\DB\exttables\environment.dat' (
    environment_type CHAR(3) CHARACTER SET ASCII NOT NULL
  )
Next, create the file D:\data\DB\exttables\environment.dat and populate it with exactly three characters (e.g. TST for test, PRO for production). You can also insert the value instead; the external table file will be created automatically. Inserting might be simpler if you want more columns, data with varying lengths, etc. Just keep in mind that the file is binary, but using CHAR for all columns will make it look like text.
Do this for each environment, and make sure the file is read-only to avoid accidental inserts.
After this is done, you can obtain the environment information using
select environment_type
from environment
You can share the same file between multiple databases on the same host, and external files are - by default - not included in a gbak backup (they are only included if you apply the -convert backup option), so this would allow moving a database between environments without dragging this information along.
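For example, a query could then branch on the marker value (a sketch assuming the TST/PRO values used above):
-- map the environment marker to a readable name
select case environment_type
         when 'PRO' then 'production'
         when 'TST' then 'test'
         else 'unknown'
       end as environment_name
from environment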
Use a UDF or UDR
You can write a UDF (User-Defined Function) or UDR (User-Defined Routine) in a suitable programming language to return the information you want, and define this function in your database. Firebird can then call this function from SQL.
UDFs are considered deprecated, and you should use UDRs - introduced in Firebird 3 - instead, if possible.
I have never written a UDF or UDR myself, so I can't describe it in detail.

Using Script Task to create ADO NET (ODBC) Data Flow Source

I need some help with an SSIS Script Task (SQL 2008 R2) that dynamically creates a package. I am refining a package that copies data from a Sage Timberline (now rebranded as Sage 300) Pervasive SQL environment to a SQL Server data warehouse. I can create a package that opens the connection to Timberline and copies the data to a table in SQL Server. The problem is that for each company in Timberline and each table in SQL, I need to create a separate data flow task. Given the three Timberline company folders and the number of tables in each folder, this would take a lot of time to create and be cumbersome to maintain and troubleshoot.
I am trying to create a package that uses a Foreach Loop to create a package that creates an ADO.NET (ODBC) source (Timberline) and an OLE DB destination (SQL), and dynamically handles the column mapping. I found code here that almost does what I need.
I tested this code and it works great using an OLE DB SQL source and destination. What makes this script work is that it dynamically handles the column mapping. So, if you placed it into a Foreach Loop over the 100 or so tables, each iteration could dynamically create the data flow, map the columns, and then execute the new package.
My problem is that I can only connect to Timberline using ODBC, so I need to modify the script to create the source connection with ADO.NET (ODBC) instead of OLE DB. I'm having a lot of trouble trying to figure this out. Could someone please help me out with this?
Here are the other things I tried first, before this approach:
Solution: Set up a linked server to Timberline Pervasive SQL
Problem: SQL Server is 64-bit and the Timberline driver is 32-bit. Using a linked server returns an architecture mismatch error. I called Sage and they said they have no plans to release a 64-bit driver.
Solution: Use one of the SQL Transfer tasks
Problem: Only works with SQL Server databases; this source is a Pervasive SQL database.
Solution: Use an “INSERT … INTO …” type script
Problem: This requires a linked server. See the problem above.
Here’s the section of the original VB .NET code I need help with:
'To Create a package named [Sample Package]
Dim package As New Package()
package.Name = "Sample Package"
package.PackageType = DTSPackageType.DTSDesigner100
package.VersionBuild = 1
'To add Connection Manager to the package
'For source database (OLTP)
Dim OLTP As ConnectionManager = package.Connections.Add("OLEDB")
OLTP.ConnectionString = "Data Source=.;Initial Catalog=OLTP;Provider=SQLNCLI10;Integrated Security=SSPI;Auto Translate=False;"
OLTP.Name = "LocalHost.OLTP"
'To add Load Employee Dim to the package [Data Flow Task]
Dim dataFlowTaskHost As TaskHost = DirectCast(package.Executables.Add("SSIS.Pipeline.2"), TaskHost)
dataFlowTaskHost.Name = "Load Employee Dim"
dataFlowTaskHost.FailPackageOnFailure = True
dataFlowTaskHost.FailParentOnFailure = True
dataFlowTaskHost.DelayValidation = False
dataFlowTaskHost.Description = "Data Flow Task"
'-----------Data Flow Inner component starts----------------
Dim dataFlowTask As MainPipe = TryCast(dataFlowTaskHost.InnerObject, MainPipe)
' Source OLE DB connection manager to the package.
Dim SconMgr As ConnectionManager = package.Connections("LocalHost.OLTP")
' Create and configure an OLE DB source component.
Dim source As IDTSComponentMetaData100 = dataFlowTask.ComponentMetaDataCollection.[New]()
source.ComponentClassID = "DTSAdapter.OLEDBSource.2"
' Create the design-time instance of the source.
Dim srcDesignTime As CManagedComponentWrapper = source.Instantiate()
' The ProvideComponentProperties method creates a default output.
srcDesignTime.ProvideComponentProperties()
source.Name = "Employee Dim from OLTP"
' Assign the connection manager.
source.RuntimeConnectionCollection(0).ConnectionManagerID = SconMgr.ID
source.RuntimeConnectionCollection(0).ConnectionManager = DtsConvert.GetExtendedInterface(SconMgr)
' Set the custom properties of the source.
srcDesignTime.SetComponentProperty("AccessMode", 0)
' Mode 0 : OpenRowset / Table - View
srcDesignTime.SetComponentProperty("OpenRowset", "[dbo].[Employee_Dim]")
' Connect to the data source, and then update the metadata for the source.
srcDesignTime.AcquireConnections(Nothing)
srcDesignTime.ReinitializeMetaData()
srcDesignTime.ReleaseConnections()
Thanks in advance!
The C# code here is what you need if you need a Derived Column transform between the Source and Destination...
http://bifuture.blogspot.com/2011/01/ssis-adding-derived-column-to-ssis.html
To get the Source & Destination connections working, there is some secret sauce here to get things working between COM and .Net...
http://blogs.msdn.com/b/mattm/archive/2008/12/30/api-sample-ado-net-source.aspx
There is a similar page showing what to do for OleDB connections too.
Creating the source tables is easy. The available ODBC metadata collections should be retrieved with GetSchema("MetaDataCollections"); this returns a list of the schema collections available for that particular ODBC driver.
Next, you'll want to look at the data types returned from GetSchema("DataTypes"), so you can correctly interpret the data types for each column retrieved from GetSchema("Columns") to build your SQL Server CREATE TABLE script (which I'm assuming you've done).
To at least figure out which tables have primary keys, you'll need to loop over each table returned from GetSchema("Tables") in order to work with GetSchema("Indexes"). There's a bug that requires you to query the Indexes collection one table at a time; it is easy to google this. Build a string array of restrictions containing the table name and pass it as the second argument: GetSchema("Indexes", restrictions).
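A rough C# sketch of that metadata harvesting over ODBC (the DSN/credentials are placeholders, and the slot holding the table name in the restrictions array is driver-dependent, so treat it as an assumption and check GetSchema("Restrictions") for your driver):
using System;
using System.Data;
using System.Data.Odbc;

class TimberlineSchemaDump
{
    static void Main()
    {
        // DSN-based connection string (placeholder values).
        using (var conn = new OdbcConnection("Dsn=dsnname;Uid=user;Pwd=pwd;"))
        {
            conn.Open();

            // Which schema collections does this particular ODBC driver expose?
            DataTable collections = conn.GetSchema("MetaDataCollections");
            Console.WriteLine("{0} metadata collections available", collections.Rows.Count);

            // Pull tables and columns once, then filter locally.
            DataTable tables = conn.GetSchema("Tables");
            DataTable columns = conn.GetSchema("Columns");
            Console.WriteLine("{0} tables, {1} column rows", tables.Rows.Count, columns.Rows.Count);

            foreach (DataRow tbl in tables.Rows)
            {
                string tblName = tbl["TABLE_NAME"].ToString();

                // Indexes have to be queried one table at a time; the table name
                // goes into the restrictions array (slot position assumed here).
                DataTable indexes = conn.GetSchema("Indexes", new string[] { null, null, tblName });
                Console.WriteLine("{0}: {1} index rows", tblName, indexes.Rows.Count);
            }
        }
    }
}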
What I did was get the Tables and Columns collections into object variables in my parent SSIS package. Because Timberline is so fast (not), it seemed more efficient to pull all the columns down once and filter them locally, which I do to create the tables in SQL Server if necessary.
Once that is done, I use the local copy of Tables again to manipulate an SSIS package in a Script task in "design mode" (change the source and destination target tables, and redo the column mappings), and execute the now-in-memory SSIS package.
It took me a while to figure out; both of the URLs above were required. I found and copied the .NET 2.0 Dts.PipelineWrap and Dts.RuntimeWrap DLLs to the Microsoft.Net\FrameworkV2.0xxxxx folder, then referenced them in each script task that needed them, before setting up my "using DtsPW = Microsoft.SqlServer.Dts.Pipeline.Wrapper", etc.
Of note, because Timberline is 32-bit ODBC, I think it's necessary to build the SSIS package to target "X86", and to target the script tasks at the .NET 2.0 framework.
I used the Derived Column code because I needed to copy multiple Timberline DBs into one SQL Server DB. Derived Column adds a "CompanyID" value to the output pipeline to SQL Server.
In the end, map the Destination's Virtual Input columns to its External Metadata columns, based on the pipeline the Destination is attached to:
foreach (DtsPW.IDTSVirtualInputColumn100 vColumn in destVirtInput.VirtualInputColumnCollection)
{
    var vCol = destInst.SetUsageType(destInput.ID, destVirtInput, vColumn.LineageID, DtsPW.DTSUsageType.UT_READWRITE);
    destInst.MapInputColumn(destInput.ID, vCol.ID, destInput.ExternalMetadataColumnCollection[vColumn.Name].ID);
}
Anyways, that code will make more sense in the context of the bifuture.blogspot.com page.
The EzApi library could help with this too, but its AdoNet connection source is coded as a virtual class, so you'd need to implement specific classes to use it. My C# kung fu is not strong enough for that in the time I have...
Also, CozyRoc sells a toolset with custom SSIS controls (data flow Source and Destination controls...) that looks like it does this on-the-fly input-to-output column mapping as well.
My package seems to work well enough now... Oh, and one more thing: I had no luck trying to use DSN-less ODBC connections to Timberline; only Dsn=dsnname;Uid=user;Pwd=pwd; worked.
SSIS packages running in 64-bit land cannot see 32-bit DSNs on a 64-bit OS, it seems... at least, it didn't work for me (Win7-64, 32-bit Text ODBC DSN).

Running Jasper Reports against an in-memory H2 datasource?

I'm trying to run Jasper reports against both a live and a reporting database, but any report run against the live database throws exceptions about not finding the right tables (although the default PUBLIC schema is found). It looks like the main DataSource connection isn't honoring the H2 connection settings, which specify IGNORECASE=true, as the generated columns and tables are capitalized but my queries are not.
DataSource.groovy dataSource:
dataSource {
hibernate {
cache.use_second_level_cache = false
cache.use_query_cache = false
}
dbCreate = "create-drop" // one of 'create', 'create-drop','update'
pooled = true
driverClassName = "org.h2.Driver"
username = "sa"
password = ""
url = "jdbc:h2:mem:testDb;MODE=PostgreSQL;IGNORECASE=TRUE;DATABASE_TO_UPPER=false"
jndiName = null
dialect = null
}
Datasources.groovy dataSource:
datasource(name: 'reporting') {
environments(['development', 'test'])
domainClasses([SomeClass])
readOnly(false)
driverClassName('org.h2.Driver')
url('jdbc:h2:mem:testReportingDb;MODE=PostgreSQL;IGNORECASE=TRUE;DATABASE_TO_UPPER=false')
username('sa')
password('')
dbCreate('create-drop')
logSql(false)
dialect(null)
pooled(true)
hibernate {
cache {
use_second_level_cache(false)
use_query_cache(false)
}
}
}
What fails:
JasperPrint print = JasperFillManager.fillReport(compiledReport, params,dataSource.getConnection())
While debugging, the only difference I've found is that the live dataSource, when injected or looked up with DatasourcesUtils.getDataSource(null), is a TransactionAwareDatasourceProxy, while DatasourcesUtils.getDataSource('reporting') is a BasicDataSource.
What do I need to do for Jasper to operate on the active in-memory H2 database?
This failure is not reproducible against a real postgres database.
You are probably opening a different database. Using the database URL jdbc:h2:mem:testDb opens an in-memory database within the same process and class loader.
Have you already tried using a regular persistent database, with the database URL jdbc:h2:~/testDb?
To open an in-memory database that is running in a different process or class loader, you need to use server mode. That means you need to start a server in the process where the database is running, and connect to it using jdbc:h2:tcp://localhost/mem:testDb.
See also the database URL overview.
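If the report really must run against the live in-memory database from a different process or class loader, a minimal sketch of the server-mode approach (the port number is an assumption) looks like this:
// Minimal sketch: expose the in-memory database over TCP so that code using
// another process or class loader connects to the *same* database instead of
// silently opening a new, empty jdbc:h2:mem: one. Port 9092 is an assumption.
import java.sql.Connection;
import java.sql.DriverManager;
import org.h2.tools.Server;

public class H2TcpSketch {
    public static void main(String[] args) throws Exception {
        // Start the TCP server inside the JVM that owns the in-memory database.
        Server server = Server.createTcpServer("-tcpPort", "9092").start();

        // Clients (e.g. the Jasper filler) connect through the server URL.
        Connection conn = DriverManager.getConnection(
                "jdbc:h2:tcp://localhost:9092/mem:testDb", "sa", "");

        conn.close();
        server.stop();
    }
}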
H2 doesn't currently support case-insensitive identifiers (table names, column names). I know other databases support this, but currently H2 uses a regular java.util.HashMap<String, ..> for metadata, and that is case-sensitive (whether or not IGNORECASE is used).
In this case, the identifier names are case-sensitive. I tried with the database URL jdbc:h2:mem:testReportingDb;MODE=PostgreSQL;IGNORECASE=TRUE;DATABASE_TO_UPPER=false using the H2 Console:
DROP TABLE IF EXISTS UPPER;
DROP TABLE IF EXISTS lower;
CREATE TABLE UPPER(NAME VARCHAR(255));
CREATE TABLE lower(name VARCHAR(255));
-- ok:
SELECT * FROM UPPER;
SELECT * FROM lower;
-- fail (table not found):
SELECT * FROM upper;
SELECT * FROM LOWER;
So, the question is: when the tables were created, were they created with uppercase identifiers, or with a different database URL? Is it possible to change that? If not, is it possible to use a different database URL?
Just don't run reports against in-memory datasources, and this won't be an issue.

T-SQL 2000: Four part table name

I don't usually work with linked servers, and so I'm not sure what I'm doing wrong here.
A query like this works against a linked FoxPro server from SQL 2000:
EXEC('Select * from openquery(linkedServer, ''select * from linkedTable'')')
However, from researching on the internet, something like this should also work:
Select * from linkedserver...linkedtable
but I receive this error:
Server: Msg 7313, Level 16, State 1, Line 1
Invalid schema or catalog specified for provider 'MSDASQL'.
OLE DB error trace [Non-interface error: Invalid schema or catalog specified for the provider.].
I realize it's supposed to be ServerAlias.Category.Schema.TableName, but if I run sp_tables_ex on the linked server, the catalog for all tables is just the network path to where the data files are, and the schema is null.
Is this server setup incorrectly? Or is what I'm trying to do not possible?
From MSDN:
Always use fully qualified names when
working with objects on linked
servers. There is no support for
implicit resolution to the dbo owner
name for tables in linked servers
You cannot rely on the implicit schema name resolution of the '..' notation for linked servers. For a FoxPro 'server' you're going to have to use the database and schema as they map to their FoxPro counterparts in the driver you use (I think they map to folder and file name, but I haven't used an ISAM file driver in more than 10 years now).
I think you need to be explicit about resources in the linked server part of the query, for example:
EXEC SomeLinkedServer.Database.dbo.SomeStoredProc
In other words, just dotting them out doesn't work in this case; you have to be more specific.
It's actually:
ServerAlias.Catalog.Schema.LinkedTable
Catalog is the database that you're querying on the linked server, and Schema is the schema of the remote table. So a valid four-part name would look like this:
ServerAlias.AdventureWorks.HumanResources.Employee
or
ServerAlias.MyDB.dbo.MyTable
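If you're unsure what catalog and schema values the provider actually exposes (and therefore what to put in the four-part name), you can ask the linked server directly; a quick sketch using the linkedServer name from the question:
-- List the catalogs the provider exposes for this linked server
EXEC sp_catalogs @server_name = 'linkedServer'
-- List remote tables with the catalog and schema the provider reports for them
EXEC sp_tables_ex @table_server = 'linkedServer'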