Is it possible to query the linked table manager in Access 2003? - ms-access-2003

I have a directory of databases (*.mdb files) that are linked to several other *.mdb files in other locations.
We've split the original database from one file into two partitions. The databases in the directory point to the original database file (and a few other databases as well). Now I need to re-link the tables from each database in the directory to the correct partition of the original (now split) database.
I've been going through manually and re-linking the tables in each database's Linked Table Manager, but this is grossly inefficient, and if I could just query the Linked Table Manager somehow, I could easily find out whether I have changed the correct number of tables.
Is there any way to query the Linked Table Manager, through VBA, or even with SQL against the system tables using the table name and file location?
Note that I am opening the files in MS Access 2003, but it reports them as being in Access 2000 format.
Per Remou's suggestion, here is some code I wrote to relink the tables:
Sub RelinkLinks()
    Dim db As Database
    Dim tdf As TableDef
    Dim whichLoc As String
    Dim OldLoc As String
    Dim partition1Loc As String
    Dim partition2Loc As String

    OldLoc = ";DATABASE=\\lois\_DB\DB\BE.mdb"
    partition1Loc = ";DATABASE=\\lois\_DB\DB\Partition 1\BE.mdb"
    partition2Loc = ";DATABASE=\\lois\_DB\DB\Partition 2\BE.mdb"

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' Only cycle through the locations
        ' that are equal to the old one...
        If tdf.Connect = OldLoc Then
            ' Debug.Print tdf.Name & ":" & tdf.Connect
            If tdf.Name = "T_PAR_2_TBL_1" Or tdf.Name = "T_PAR_2_TBL_2" Then
                ' Only set the tables to partition 2 that were listed as
                ' in Partition 2 by the database splitter
                Debug.Print "Setting linked table " & tdf.Name & " FROM " & tdf.Connect & " TO " & partition2Loc
                whichLoc = partition2Loc
            Else
                ' If the name was not listed as in partition 2, set it to partition 1
                Debug.Print "Setting linked table " & tdf.Name & " FROM " & tdf.Connect & " TO " & partition1Loc
                whichLoc = partition1Loc
            End If
            ' We will uncomment this when we take the safety off...
            'tdf.Connect = whichLoc
            'tdf.RefreshLink
        End If
    Next tdf
End Sub

You can change and update links by referring to the TableDefs through VBA.
Dim db As Database
Dim tdf As TableDef
Set db = CurrentDb
For Each tdf In db.TableDefs
    ' Local tables have an empty Connect string, so only touch linked tables
    If tdf.Connect <> "" And tdf.Connect <> MyConnect Then ''the connect property is a string
        tdf.Connect = MyConnect
        tdf.RefreshLink
    End If
Next tdf
You can also link all the tables in an external database with VBA by using CreateTableDef; a sketch follows below. I generally find it useful to keep a table of the tables I want to link and drive the code from that.
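For example, here is a minimal sketch of linking a single table with CreateTableDef (the back-end path and both table names are placeholders, not from the original post):

Sub LinkExternalTable()
    ' Minimal sketch: create a new linked table pointing at an external back-end.
    Dim db As Database
    Dim tdf As TableDef
    Set db = CurrentDb
    Set tdf = db.CreateTableDef("LocalLinkName")          ' name the link will have locally
    tdf.Connect = ";DATABASE=\\server\share\BackEnd.mdb"  ' placeholder back-end path
    tdf.SourceTableName = "RemoteTableName"               ' table name inside the back-end
    db.TableDefs.Append tdf
End Sub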

Related

LONG VARCHAR - Read from table to Front (C#) then INSERT / UPDATE value to a table

I'm reading table info from a SqlBase DB using a DataAdapter.Fill in C#. It works perfectly for every variable type except LONG VARCHAR; in that case it converts to a String in C#, and if I add a watch on the variable I see some weird chars in it, so later, when I try to insert/update another table (in another database), it fails.
I know that even if the value were OK in C# I couldn't insert it as-is; the documentation says I should bind the value to a variable to be able to insert it into a table, but I'm not sure how to do that, since I'm creating the scripts in C# to be run on SqlBase rather than taking direct action from C#. Even if I could, I'm not able to read the value correctly, since it converts to a string with weird digits in it. Is this LONG VARCHAR like a VARBINARY in SQL Server? I assume so, because the column I have problems with is a LOGO, i.e. a picture.
So in short, is there any way to:
1. read a LONG VARCHAR from .NET, and then..
2. ..use it when inserting/updating values in a table?
(1) is .NET, but (2) is a SQL script to be run on SqlBase using SQLTalk.
Thanks!
Suggest you UNLOAD the LONG data to a flat file using the SQLTalk UNLOAD command; that way you'll get readable data. Read the flat file using C# if you need to, and do whatever you want with it, but to re-load the data into another table using SQLTalk you need to use specific syntax. Go here: SQLBase Manuals (all versions), extract the manual appropriate to the version of SQLBase you are using, and 1) read up on UNLOAD in the 'SQLBase Language Reference' to get the LONG data out into a flat file (there are different syntaxes giving different results); then 2) read up on 'Examples of Bind Variables for Long data' in the 'SQLTalk Command Reference', as you have to supply LONG VARCHAR data through bind variables.
When inserting long data into a LONG VARCHAR or LONG NVARCHAR or LONG BINARY column, precede it with the $LONG keyword. You can then start entering data on the next line, and continue entering on successive lines. To mark the end of text, enter a double slash on a new line (//). e.g.
INSERT INTO BIO (NAME, BIO) VALUES (:1,:2)
\
SHAKESPEARE, $LONG
William Shakespeare was born in Stratford-on-Avon on
April 16, 1564. He was England's most famous poet and
dramatist. . . . .. . .
He died in 1616, leaving his second best bed to his wife.
//
If the data for the LONG (N)VARCHAR or LONG VARBINARY column comes from a file, enter the name of the file after the $LONG keyword. e.g.
INSERT INTO BIO (NAME, BIO) VALUES (:1,:2)
\
SHAKESPEARE, $LONG shakes.txt
JONSON,$LONG jonson.txt
O'NEILL,$LONG oneill.txt
/
To update LONG data, e.g.
UPDATE EXPENSES SET COMMENTS = :1 WHERE DATE = :2
\
"Beltran Tree Service", 1/1/94
"Hercules", 1/2/94
"Checkup", 1/3/94
/

Bi-directional database syncing for Postgres and Mongodb

Let's say I have a local server running, and I also have an identical server already running on Amazon.
Both servers can CRUD data in their own databases.
Note that the servers use both `postgres` and `mongodb`.
Now, when no one is using the wifi (usually at night), I would like to sync both the postgres and mongodb databases so that all writes made to each database on the server get properly applied to the corresponding database on the local machine.
I don't want to use Multi-Master because:
MongoDB does not support this architecture itself, so perhaps I will need a complex alternative.
I want to control when and how much I sync both databases.
I do not want to use network bandwidth when others are using the internet.
So can anyone show me the right direction?
Also, if you can list some tools that solve my problem, that would be very helpful.
Thanks.
We have several drivers that would be able to help you with this process. I'm presuming some knowledge of software development, and will showcase our ADO.NET Provider for MongoDB, which uses the familiar-looking MongoDBConnection, MongoDBCommand, and MongoDBDataReader objects.
First, you'll want to create your connection string for connecting with your cloud MongoDB instance:
string connString = "Auth Database=test;Database=test;Password=test;Port=27117;Server=http://clouddbaddress;User=test;Flatten Objects=false";
You'll note that we have the Flatten Objects property set to false; this ensures that any JSON/BSON objects contained in the documents will be returned as raw JSON/BSON.
After you create the connection string, you can establish the connection and read data from the database. You'll want to store the returned data in some way that would let you access it easily for future use.
List<string> columns = new List<string>();
List<object> values;
List<List<object>> rows = new List<List<object>>();
using (MongoDBConnection conn = new MongoDBConnection(connString))
{
    // create a WHERE clause that will limit the results to newly added documents
    MongoDBCommand cmd = new MongoDBCommand("SELECT * FROM SomeTable WHERE ...", conn);
    MongoDBDataReader rdr = cmd.ExecuteReader();
    int results = 0;
    while (rdr.Read())
    {
        values = new List<object>();
        for (int i = 0; i < rdr.FieldCount; i++)
        {
            // capture the column names once, on the first row
            if (results == 0)
                columns.Add(rdr.GetName(i));
            values.Add(rdr.GetValue(i));
        }
        rows.Add(values);
        results++;
    }
}
After you've collected all of the data for each of the objects that you want to replicate, you can configure a new connection to your local MongoDB instance and build queries to insert the new documents.
connString = "Auth Database=testSync;Database=testSync;Password=testSync;Port=27117;Server=localhost;User=testSync;Flatten Objects=false";
using (MongoDBConnection conn = new MongoDBConnection(connString))
{
    foreach (var row in rows)
    {
        // code here to create comma-separated strings for the columns
        // and values to be inserted in a SQL statement
        string sqlInsert = "INSERT INTO backup_table (" + column_names + ") VALUES (" + column_values + ")";
        MongoDBCommand cmd = new MongoDBCommand(sqlInsert, conn);
        cmd.ExecuteNonQuery();
    }
}
At this point, you'll have inserted all of the new documents. You could then change your filter (the WHERE clause at the beginning) to filter based on updated date/time and update their corresponding entries in the local MongoDB instance using the UPDATE command.
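As a rough sketch of that second pass (reusing the local connection from the block above; the _id and some_column names and the placeholder values are assumptions for illustration, not part of the original answer):

// Placeholder values; in practice these come from the cloud-side reader results.
string docId = "...";
string newValue = "...";
// _id and some_column are assumed column names for illustration.
string sqlUpdate = "UPDATE backup_table SET some_column = '" + newValue + "' WHERE _id = '" + docId + "'";
MongoDBCommand update = new MongoDBCommand(sqlUpdate, conn);
update.ExecuteNonQuery();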
Things to look out for:
Be sure that you're properly filtering for new/updated entries.
Be sure that you're properly interpreting the type of each value so that you surround it with quotes (or not) as appropriate when building the SQL query.
We have a few drivers that might be useful to you. I demonstrated the ADO.NET Provider above, but we also have a driver for writing apps in Xamarin and a JDBC driver (for Java).

mysql import sql file without overwrite but update current value + backup value

I have 2 databases:
1. db_temporary
2. db_primary
In db_temporary I have a table containing a bunch of data that I want to keep; rather than overwrite it, I want to update it from an imported MySQL file.
I dump db_primary and import the backup into db_temporary with these commands:
D:\mysql4.0.27\bin\mysqldump.exe --add-drop-table db_primary tb_wantomodify > "backupfile.sql"
D:\mysql4.0.27\bin\mysql.exe db_temporary < "backupfile.sql"
I have tried This Solution, and indeed nothing is overwritten, but what I want is to update (add to) the current fields of db_temporary with the new values from the backup.
Technically it is similar to UPDATE ... SET curvalue = curvalue + 'newvaluefrombackup'.
Is it possible to do this?
Thank you
Firstly, you can put both of those tables in the same database; there's no reason to create two separate files. Secondly, what you want here is the SQL UPDATE command. First create a database object and set it to your database.
SQLiteDatabase database = SQLiteDatabase.openDatabase(myPath,
        null, SQLiteDatabase.OPEN_READWRITE);
database.execSQL("UPDATE " + yourTableNameHere + " SET " + theColumnYouWantToUpdate
        + "='" + theNewValue + "' WHERE " + theColumnNameToMatch + "='" + theValueToMatch + "'");
This may seem confusing at first, but what you need to understand is that SQL commands read as strings. This example assumes you're using String constants for your table data, as you should. The + sign before and after is a concatenation operator; make sure you add spaces, and don't forget the quotes around the values you want checked. There's a pretty good SQL tutorial here: http://www.1keydata.com/sql/sqlselect.html
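Separately from the general UPDATE syntax above: if both databases live on the same MySQL server, the additive merge the question asks about can be done directly across databases, skipping the dump file entirely. A sketch, assuming the table has a key column (here called id) to join on, with tb_wantomodify and curvalue taken from the question:

UPDATE db_temporary.tb_wantomodify AS t, db_primary.tb_wantomodify AS p
SET t.curvalue = t.curvalue + p.curvalue
WHERE t.id = p.id;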

How to keep Powerbuilder from prepending table owner to table name (Postgres / PB 10.5)

I am connecting to a PostgreSQL database from PowerBuilder 10.5, using ODBC on Windows 7, and I notice that PB prepends the table owner to the table name; e.g. if I am connected to the database as "user", it will format the query as "SELECT x, y, z FROM user.tablename".
This makes sense in Sybase, but does not work correctly in Postgres, where schemas and users are separate things.
I tested by creating a Postgres schema with the same name as the user and then putting the tables within that schema. So when PB used "username.tablename", Postgres interpreted it as "schemaname.tablename" and this worked.. but it was just a test, not a usable solution.
The docs say that if the table owner is the same as the current user, PB will not prepend the owner, but if they don't match, it will. In my test program I see the opposite: if the UID is the same as the owner name, it DOES prepend; if they don't match, it doesn't.
Here's my connect code:
sqlca.DBMS = "ODBC"
sqlca.userid = "pblearn"
sqlca.dbpass = "pblearn"
string ls_DSN = "PBLEARN"
string ls_connect = "ConnectString='"
ls_connect += "DSN=" + ls_DSN + ";"
ls_connect += "UID=" + sqlca.userid + ";"
ls_connect += "PWD=" + sqlca.dbpass + "'"
sqlca.dbparm = ls_connect + ", SQLQualifiers=0"
connect;
My schemas are pblearn and public (the default), and I have two users, "pblearn" and "pblearn2". If I connect as pblearn, the prepend happens and I see the tables in the pblearn schema (the owner of the tables); if I connect as pblearn2, the username is not prepended and I see the tables in the public schema.
How can I get PB to either not prepend the username, or to prepend a consistent schema name regardless of user?
Thanks
In your database section of the PBODB105.INI used by your installation, add the following property:
PBTableOwner='NO'
From the documentation:
;PBTableOwner='NO' - do not qualify table names, default is 'YES'
EDIT:
If no sections exist for a particular connection, then PowerBuilder runs as an ODBC-compliant client and any extensions that might be available cannot be utilized. The search algorithm for the entries is:
IF a section and entry are present for the current datasource
THEN use the entry value
ELSE IF a section corresponding to DBMS_Name Driver_Name exists
THEN use the entry value if it exists
ELSE IF a section corresponding to DBMS_Name exists
THEN use the entry value if it exists
SECTION headings:
DataSource_Name (none are in the ini file by default, but if you need to override the more general DBMS_Name Driver_Name or DBMS_Name settings, you would put in a data-source-specific section)
DBMS_Name Driver_Name (Driver_Name is stripped of its .dll extension)
DBMS_Name (the DBMS name returned by the SQLGetInfo call)
So the easiest way to add a section for your Postgres installation is to make a section named after your current data source name; or, if you prefer to use the DBMS_Name, check this example: http://www.rgagnon.com/pbdetails/pb-0061.html to see the DBMS name returned by the ODBC driver.
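For instance, a hypothetical PBODB105.INI section keyed on the data source name from the question's connect code (the section name is an assumption; per the search rules above it must match your DSN, or alternatively the DBMS name the driver reports):

[PBLEARN]
PBTableOwner='NO'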

Copying Access 97 tables to SQL Server 2008 R2 64-bit daily

I have an ancient system that uses an Access 97 database to store information. I want to copy the data from the 90-some tables to a SQL Server 2008 database on a daily basis. I already have the tables defined in SS2008.
There is an equally ancient DTS job that has a separate box-line-box for each table. I'd rather use an easier-to-maintain method that is written in code, not lines and boxes. (Yes, I know that SSIS lines and boxes are translated into XML, but that's kind of hard for me to read and write.)
I can't use Linked Server or OPENROWSET because my SS2008 server runs as a 64-bit process, so the OLEDB Jet driver is not available. The OLEDB MSOffice ACE 12.0 driver is 64-bit, but it isn't supposed to be used with database servers because it is not threadsafe (according to Microsoft). Also, I can't get it to work ("Could not find installable ISAM") within SS2008 despite extensive research. I can read the Access table with OLEDB Jet in a 32-bit program such as SSIS.
So, I'm looking for a modern, non-box-and-line, elegant 32-bit solution to copy the tables from the Access mdb/mdw file to SS2008.
Can I do this with:
a single T-SQL script
some C# thing that does introspection to determine table structure and then executes SQL for each table
some magic "copy every table from this OLEDB to that SQL Server" package
There are several close dups of this question (Copy access database to SQL server periodically, Migrating Access Tables to SQL Server - Beginner), but none that deal with the 32-bit limitation that makes OPENROWSET/Linked Server a non-option.
You could probably do it from within Access itself using VBA like the following:
Public Function CopyTableDataToSqlServer()
    Dim tbd As DAO.TableDef, qdf As DAO.QueryDef, connStr As String
    connStr = _
        "ODBC;" & _
        "Driver={SQL Server};" & _
        "Server=.\SQLEXPRESS;" & _
        "Database=cloneDB;" & _
        "Trusted_Connection=yes;"
    For Each tbd In CurrentDb.TableDefs
        If Not ((tbd.Name Like "MSys*") Or (tbd.Name Like "~*")) Then
            Debug.Print tbd.Name
            Set qdf = CurrentDb.CreateQueryDef("")
            qdf.Connect = connStr
            qdf.SQL = "DELETE FROM [" & tbd.Name & "]"
            qdf.ReturnsRecords = False
            qdf.Execute
            Set qdf = Nothing
            CurrentDb.Execute _
                "INSERT INTO [" & connStr & "].[" & tbd.Name & "] " & _
                "SELECT * " & _
                "FROM [" & tbd.Name & "] ", _
                dbFailOnError
        End If
    Next
    Set tbd = Nothing
    Debug.Print "Done."
End Function
I cast my vote for some C# thing. You may need to watch out for memory usage if you have large tables.
The basic idea goes like this:
foreach (tableName in access)
    get the table from access
    optionally clear the target table
    SqlBulkCopy it to the target database
A more complicated solution would be to grab both tables and only update the changed rows.
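For the simple version, here is a minimal C# sketch of that idea (assuming the program is compiled as x86 so the 32-bit Jet OLEDB provider can load; the file path, connection strings, and target database name are placeholders):

using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

class AccessToSqlCopier
{
    static void Main()
    {
        // NOTE: must run as a 32-bit (x86) process so the Jet OLEDB provider is available.
        string accessConn = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\legacy.mdb";
        string sqlConnStr = "Server=.;Database=TargetDb;Trusted_Connection=yes";

        using (OleDbConnection src = new OleDbConnection(accessConn))
        using (SqlConnection dest = new SqlConnection(sqlConnStr))
        {
            src.Open();
            dest.Open();

            // Enumerate the user tables in the Access file (excludes system tables).
            DataTable tables = src.GetSchema("Tables", new string[] { null, null, null, "TABLE" });
            foreach (DataRow t in tables.Rows)
            {
                string table = (string)t["TABLE_NAME"];

                // Optionally clear the target table first.
                using (SqlCommand clear = new SqlCommand("DELETE FROM [" + table + "]", dest))
                    clear.ExecuteNonQuery();

                // Stream the rows across with SqlBulkCopy.
                using (OleDbCommand cmd = new OleDbCommand("SELECT * FROM [" + table + "]", src))
                using (OleDbDataReader rdr = cmd.ExecuteReader())
                using (SqlBulkCopy bulk = new SqlBulkCopy(dest))
                {
                    bulk.DestinationTableName = "[" + table + "]";
                    bulk.WriteToServer(rdr);
                }
            }
        }
    }
}

Because WriteToServer consumes the reader row by row, memory use stays flat even for large tables, which addresses the memory concern mentioned above.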