I have a database that is part of a closed system, and the end user of the system would like me to write some reports using the data contained in a Sybase SQL Anywhere database. The system doesn't provide the reports they are looking for, but the data is accessible by connecting to this ASA database.
The vendor of the software would likely prefer that I not update the database, and my access is basically read-only since I am just doing some reporting. All is good, the seal is not broken, the warranty is still intact, etc., etc.
My main problem is that I am using jConnect to read from the database, and jConnect requires some "jConnect routines" to be installed in the database. I've found that I can make this happen just by running ALTER DATABASE UPGRADE JCONNECT ON, but I don't fully understand what this does or whether there are any risks associated with it.
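For reference, this is the statement in question; a minimal sketch, assuming it is run from a connection with DBA authority:

    -- Installs the jConnect system objects that the jConnect
    -- driver uses to look up catalog metadata.
    ALTER DATABASE UPGRADE JCONNECT ON;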
So, my question is: does anyone know exactly what the jConnect routines are and how they are used? Is there any risk in adding them to a database? Should I be worried about this?
If the vendor wants you to write reports using jConnect, they will have to allow the installation of the jConnect tables.
These are quite safe; where I work, the DBA team installs them as a matter of course, and we run huge databases in production with no impact.
There is an alternative driver you could use called jTDS. It's open source and supports MS SQL Server and Sybase. I'm not sure whether it requires the jConnect tables or not.
I think the additional tables are a bit of an anachronism in this day and age.
Looking at the ASA 10 docs, there is another driver: the iAnywhere JDBC driver, which seems to go through the ODBC driver and, as such, probably will not require any alteration of the database.
On the other hand, installing the "jConnect system objects" is done by running the script scripts/jcatalog.sql... You can show it to the DBAs if you want to reassure them. It creates some procedures, tables, and variables.
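If the DBAs would rather apply the script by hand than use the ALTER DATABASE shortcut, it can be run from Interactive SQL (dbisql); the path below is illustrative, relative to the SQL Anywhere install directory:

    -- Apply the jConnect catalog script manually from dbisql.
    READ 'scripts/jcatalog.sql';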
The need for this script probably comes from the fact that jConnect talks to both ASE (Sybase) and iAnywhere databases, so it needs a compatibility layer installed in the database...
As suggested, here's a TL;DR: I'm looking for an alternative to temporary tables that I can use on a Hot Standby copy of a database. Is there anything, or do I have to rewrite everything and try to do it all in subqueries?
When I joined our company last year, our ERP was hosted locally, and although I didn't have admin access to the Postgres database, I at least had read/write access to the tables.
I wrote a number of reports (using the SQL Command option in Crystal Reports) and SQL scripts that use temporary tables. However, we've just migrated to a hosted version of the ERP, and rather than access to the live database we have been given access to a Hot Standby copy, mainly due to load-balancing issues.
Unfortunately, the software company didn't warn us that this would be the case, or that it would be read-only access. I found this out when testing some scripts: obviously, anything with a temporary table failed.
I use temporary tables for things like storing dates and bank holiday information, holding temporary calculations and so on.
So I'm looking for an alternative to temporary tables that I could use on the Hot Standby copy; otherwise I'll have to rewrite everything and try to do it all in subqueries.
I've looked at using CTEs (WITH), but the scope is far too small, as I'd need access to the results throughout the script.
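To illustrate the scope problem, a CTE only exists for the single statement it is attached to (table names here are made up):

    -- The CTE is visible only within this one statement...
    WITH bank_holidays AS (
        SELECT holiday_date FROM calendar WHERE is_bank_holiday
    )
    SELECT o.*
    FROM orders o
    WHERE o.order_date NOT IN (SELECT holiday_date FROM bank_holidays);

    -- ...so this next statement fails: bank_holidays no longer exists.
    SELECT count(*) FROM bank_holidays;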
Then I thought maybe I could read the data from the Hot Standby but create the temporary tables in a different database/schema, though I don't think that's viable. If it is, I might have to ask the software house for access to another database/schema. postgres_fdw would seem the most likely candidate, as you can update a foreign table, but I can't see anything about dropping and creating tables.
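For what it's worth, if a small writable database were granted, the idea would be to run the reports there and pull the live data in over postgres_fdw; a rough sketch, with every server, schema, table, and credential name hypothetical:

    -- In the writable reporting database:
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER erp_standby
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'standby.example.com', dbname 'erp', port '5432');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER erp_standby
        OPTIONS (user 'report_user', password 'secret');

    -- Expose the read-only ERP tables locally...
    CREATE SCHEMA erp;
    IMPORT FOREIGN SCHEMA public
        FROM SERVER erp_standby INTO erp;

    -- ...and use ordinary temporary tables on the writable side.
    CREATE TEMP TABLE tmp_dates AS
        SELECT * FROM erp.calendar WHERE holiday_year = 2022;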
I've only been using Postgres since last July, having previously used MSSQL, where I could probably have used a table variable, but I can't find an equivalent for that.
I've tried looking at the Postgres documentation but, to some embarrassment, I do find a lot of documentation hard to follow without relatable examples, so I might well have missed something.
Sorry for the long post!
Thanks
We would like to mirror data that is inside SAP to an external database.
Up to now, a script has exported the data every night.
The customer wants this to happen more often: it should happen every hour.
The export is quite big, and we are searching for a better way to mirror the data inside SAP to an external database.
Based on the tag, I assume that your external database is a PostgreSQL database. In that case, I don't think you will really find a pure SAP, database-independent solution.
The standard solution for this sort of replication is the SAP SLT Server. It supports taking data out of your SAP system to either an SAP target or a non-SAP target. Currently it supports the following non-SAP targets:
DB2
SAP MaxDB
Microsoft SQL Server
Oracle
Sybase ASE
As you can see, PostgreSQL is not included there (yet). In conclusion, I see the following possibilities:
Use SLT in combination with some other external DB that is supported.
Use a third party replication tool like for example SymmetricDS.
Depending on your source database, you might be able to use some database specific tools (e.g. SAP HANA Smart Data Integration).
Write some custom code to do it. In this case, you should build a sort of log table, recording (perhaps via triggers) which rows were inserted, updated, or deleted since the last replication; a rough sketch follows this list. This should really be a last resort, though, as database replication is a fairly common problem and you should not reinvent the wheel.
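To illustrate that last option, here is what such a change log might look like; PostgreSQL trigger syntax is shown purely as an illustration, and all table and trigger names are made up (on the SAP side you would more likely capture changes in ABAP or on the underlying database):

    -- Change-log table recording what changed since the last replication run.
    CREATE TABLE customer_log (
        log_id      SERIAL PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        operation   CHAR(1) NOT NULL,  -- 'I', 'U' or 'D'
        changed_at  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

    -- Trigger function capturing inserts, updates, and deletes.
    CREATE OR REPLACE FUNCTION log_customer_change() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'DELETE' THEN
            INSERT INTO customer_log (customer_id, operation)
            VALUES (OLD.customer_id, 'D');
            RETURN OLD;
        END IF;
        INSERT INTO customer_log (customer_id, operation)
        VALUES (NEW.customer_id, LEFT(TG_OP, 1));  -- 'I' or 'U'
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER trg_customer_log
        AFTER INSERT OR UPDATE OR DELETE ON customer
        FOR EACH ROW EXECUTE FUNCTION log_customer_change();

The hourly job then only has to ship the rows referenced in customer_log, rather than re-exporting everything, and can truncate the log afterwards.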
I've upgraded a server from SQL Server 2005 to SQL Server 2008, but the database runs slower when running certain stored procedures, especially against records that contain more data than others.
It's been suggested that I run a basic reindex to see if this resolves the problem.
Can someone take a look at the screenshot and advise whether this will remove any data from my database? If so, then this isn't the right thing to do.
Thanks James
P.S. I will now attach a screenshot if I can, as I've not done that before on this forum.
Those actions won't remove any data from the database, but generally I wouldn't advise shrinking the database unless you really need the space, as this can cause more index fragmentation. The only options you have ticked there that can improve performance are the rebuild/reorganise indexes and update statistics options.
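For reference, the useful parts of such a plan boil down to statements like these; the table and index names are hypothetical:

    -- Rebuild all indexes on a table (recreates them, removing fragmentation).
    ALTER INDEX ALL ON dbo.Orders REBUILD;

    -- Or the lighter-weight option: reorganise a single index in place.
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

    -- Refresh optimizer statistics so plans are based on current data.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;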
Rather than maintenance plans, though, I would generally recommend Ola Hallengren's DB maintenance scripts, as they offer more flexibility and are generally a lot better than these plans:
Ola Hallengren - SQL Server Maintenance Solution
Sorry for the potential FAQ, RTFM, etc. If I understand correctly, transactions cannot be used in native scripting units (functions, including anonymous DO blocks). What would the PostgreSQL folks recommend as the least "unnatural" way to combine scripting and transactions?
I think you are talking about autonomous transactions.
If so, you are correct that PostgreSQL doesn't support true stored procedures with autonomous transactions yet. (Feel free to sponsor work or contribute time...)
Your options are:
Use dblink to make a connection back to your own database and do the discrete units of work that way (see the sketch after this list)
Use an external process that connects to Pg
Use an in-DB script in PL/Python, PL/Perl, etc. that connects to the database using psycopg2 / DBD::Pg / etc., rather than using the SPI, and does the work that way. Essentially you code the script like an externally connecting script, but run it within the DB for convenience.
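A minimal sketch of the dblink option, with the audit_log table and connection details made up; each dblink_exec call runs on its own connection, so its work commits independently of the calling transaction:

    CREATE EXTENSION IF NOT EXISTS dblink;

    CREATE OR REPLACE FUNCTION log_attempt(msg text) RETURNS void AS $$
    BEGIN
        -- This INSERT commits on the second connection even if the
        -- transaction that called log_attempt() later rolls back.
        PERFORM dblink_exec(
            'dbname=' || current_database(),
            format('INSERT INTO audit_log (message) VALUES (%L)', msg)
        );
    END;
    $$ LANGUAGE plpgsql;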
I have a PostgreSQL database with millions of records, and I have to develop a website that will use this database via Entity Framework (using the dotConnect for PostgreSQL provider if I stay on PostgreSQL).
Since SQL Server and .NET are both native to the Windows platform, should I migrate the database from PostgreSQL to SQL Server 2008 R2 for performance reasons?
I have read some blog posts comparing the two RDBMSs, but I am still confused about which system I should use.
There is no clear answer here, as it's subjective; however, this is what I would consider:
The overhead of learning a new DBMS and its tools.
The SQL dialects each RDBMS uses and if you are using that dialect currently.
The cost (in money and time) required to migrate from PostgreSQL to another RDBMS.
Do you or your client have an ongoing budget for the new RDBMS? If not, don't make the mistake of developing an application to use a RDBMS that will never see the light of day.
Personally, if your current database is working well, I wouldn't change. Why fix what isn't broken?
You need to find out whether there is actually a problem, and whether moving to SQL Server would fix it, before making any application changes.
Start by ignoring the fact that you're using .NET and Entity Framework. Look at the queries your web application is going to make, and try them directly against the database. See whether they return the information quickly enough.
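In PostgreSQL, for example, you could time each candidate query directly in psql; the query below is just a placeholder:

    -- Shows the actual plan, row counts, and execution time for the query.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT c.name, count(*)
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name;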
Only if, after you've tuned indexes and so on, you can't make the answers come back in a time you're happy with should you decide the database is the problem. At that point it makes sense to run the same tests against a SQL Server database, but don't just assume SQL Server will be faster. You might find that neither can do what you need, and that you need faster disks, more memory, etc.
The mechanism you use to talk to the database (dotConnect or the Microsoft drivers) will likely be a very minor performance consideration, since the amount of information flowing (SQL statements in one direction and result sets in the other) will be almost identical for both technologies.