I am working on a task where I have to create a shell script to sync databases between a source and a target. The target is always localhost:<db_name> (db_name can change). The source DB can also change.
I am using pgsync to do the sync between target and source. The way I am structuring my script is to start a shell script that offers a choice of which DB to sync; based on the selection, it builds the source URL and passes it to pgsync as the from attribute.
I have been looking through the pgsync documentation and couldn't find a way to pass it the from value. Is it possible?
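For what it's worth, the structure described above can be sketched roughly as follows. This is a minimal sketch in Python rather than shell; the SOURCES map and credentials are placeholders, and it assumes pgsync accepts --from/--to on the command line (mirroring the from/to keys in .pgsync.yml), which you should verify against your pgsync version:

import subprocess

# Hypothetical menu of source databases; names and URLs are placeholders.
SOURCES = {
    "1": ("sales", "postgres://user:secret@sales-db.example.com:5432/sales"),
    "2": ("billing", "postgres://user:secret@billing-db.example.com:5432/billing"),
}

choice = input("Which DB do you want to sync? 1) sales 2) billing: ").strip()
db_name, source_url = SOURCES[choice]  # KeyError on an unknown choice

# Target is always localhost:<db_name>; pass source/target via --from/--to.
subprocess.run(
    ["pgsync", "--from", source_url, "--to", f"postgres://localhost:5432/{db_name}"],
    check=True,  # raise if pgsync exits non-zero
)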
Hi, I've been trying to execute a custom activity in ADF that receives a CSV file from container (A); after further transformation of the data set, the transformed DataFrame is stored in another CSV file in the same container (A).
I've written the transformation logic in Python and have it stored in the same container (A).
The error arises here: when I execute the pipeline, it returns the error *can't find the specified file*.
Nothing is wrong with the connections. Is anything wrong with the Batch account or pools?
Can anyone tell me where to place the Python script?
Install Azure Batch Explorer and make sure to choose the proper configuration for the virtual machine (dsvm-windows), which will ensure Python is already in place on the virtual machine where your code runs.
This video explains the steps
https://youtu.be/_3_eiHX3RKE
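For context on where the script runs: as far as I know, Batch downloads the activity's resource files into the task's working directory, so the transformation script can usually address the input CSV by bare file name. A minimal sketch, assuming hypothetical input.csv/output.csv names and a pandas-based transform (not your actual logic):

# A hypothetical transform.py for the custom activity; assumes Batch has
# downloaded input.csv next to this script in the task working directory.
import pandas as pd

df = pd.read_csv("input.csv")          # CSV pulled from container (A)

# Placeholder transformation: drop empty rows and normalize column names.
df = df.dropna(how="all")
df.columns = [c.strip().lower() for c in df.columns]

# Write the transformed frame; a follow-up step uploads it back to container (A).
df.to_csv("output.csv", index=False)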
I have a large Mongo database (5M documents). I edit the database from an offline application, so I store the database on my local computer. However, I want to be able to maintain an online copy of the database, so that my website can access it.
How can I update the online copy regularly, without having to upload multiple GBs of data every time?
Is there some way to "track changes" and upload only the diff, like in Git?
Following up on my comment:
Can't you store the commands you used on your offline DB, and then apply them on the online DB through a script running over SSH, for instance? Or, even better, upload a file with all the commands you ran on your offline base to your server, and then execute them with a cron job or a bash script? (The only requirement would be for your bases to have the same starting point and the same state when you execute the script.)
I would recommend storing all the queries you execute on your offline base. To do this you have several options; the one I can think of is the following: you can set the profiling level to log all your queries.
(Here is a more detailed thread on the matter: MongoDB logging all queries)
Then you would have to extract them somehow (grep?), or store them directly in another file on the fly as they are executed.
For uploading the script, it depends on what you would like to use, but I suppose you would need to do it during low-usage hours, and you could automate the task with a cron job and an SSH tunnel.
I guess it all depends on your constraints (security, downtime, etc.).
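To make the profiling suggestion concrete, here is a rough PyMongo sketch (the database name and the replay filter are placeholders; note that system.profile is a capped collection, so entries roll over):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # placeholder database name

# Profiling level 2 records every operation in the db's system.profile collection.
db.command("profile", 2)

# ... run your offline edits here ...

# Read back what was recorded, e.g. only the write operations worth replaying online.
for op in db["system.profile"].find({"op": {"$in": ["insert", "update", "remove"]}}):
    print(op["op"], op.get("ns"), op.get("command"))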
I am facing a problem with a MongoDB connection.
I have successfully imported the tMongo components into my Talend Open Studio 5.1.1, and after copying the mongo-1.3.jar file to the lib/java folder, my MongoDB jobs run successfully. The problem is that even if I provide a fake server path (IP) and a fake port for MongoDB, my job runs without an error and gives me 1 row with no data. The same goes for the right IP and port.
How do I resolve this?
I think the connection is not working. As you may know, MongoDB only checks whether the connection actually works when you perform a query on it.
(Yes, it doesn't check for a successful connection when you just connect to it.)
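You can demonstrate this lazy behavior outside Talend, too. For example, with PyMongo (just an illustration of the connection semantics, unrelated to the Talend components), constructing a client against a fake host succeeds and only the first real operation fails:

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError

# Connecting to a non-existent host does not raise anything by itself.
client = MongoClient("mongodb://fake-host:27017", serverSelectionTimeoutMS=2000)

try:
    client.admin.command("ping")  # the first real operation triggers the check
except ServerSelectionTimeoutError as e:
    print("Connection is actually broken:", e)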
I would instead suggest adding the MongoDB components present in Talend for Big Data by following the steps below:
The components provided for MongoDB are tMongoDBInput, tMongoDBOutput, tMongoDBConnection, etc.
Alternatively, you can download the components from http://www.talendforge.org/exchange/ and search for Mongo instead of using Talend Big Data, but I would suggest using Talend for Big Data for this.
The components come in zipped format; unzip them. In Talend Big Data you will find the components in the components folder.
Copy these unzipped components to the installation path of TOS:
C:\Talend\TOS_DI-Win32-r84309-V5.1.1\plugins\org.talend.designer.components.localprovider_5.1.1.r84309\components
Copy the mongo-1.3.jar file from the component folder into C:\Talend\TOS_DI-Win32-r84309-V5.1.1\lib\java
On many systems you might not be able to see this file; in that case, proceed with administrator privileges.
Optional, needed only on a few systems: add the corresponding component entries inside index.xml, then save index.xml.
Restart TOS
Then you will be able to use them as normal components.
Cheers!
The reason for the job running without any error could be the connection / metadata you have used for the Mongo connector. It is not possible for the job to run without any error after being given a fake path.
I guess you might have configured (re-modified) the repository connection but are using built-in metadata for the component.
I've written code that creates full backups of my ESENT database using the JetBeginExternalBackup API.
Following the MSDN guidelines, I backed up every file returned by JetGetAttachInfo and JetGetLogInfo.
I've made the backup, erased the old database, and copied the backup data to the database folder.
The DB engine was unable to start; the JetInit error code is JET_errMissingLogFile.
I've checked the backup: it only contains the database file and the "<inst>XXXXX.log" log files. It lacks the current log file (I'm using circular logging, BTW).
Is there any way to restore such a backup?
I don't want to use the JetExternalRestore API because it's too complex: I don't need to restore to another location, I don't understand why there are 3 input folders rather than 2, and I don't know the values to supply in the genLow and genHigh arguments.
I do need external backups: the ESENT database is used by ASP.NET on a remote server, and I'm backing it up over the Internet.
Or, maybe there's a way to retrieve the name of the current log file, and I should just add it to the backup?
Thanks in advance!
P.S. I've got no permission to spawn processes on my web server, so using eseutil.exe is not an option.
Unpack all the backed-up files into a single folder.
Take the name of your main database file, replace the extension with .pat, and create a zero-length file with that name, e.g. database.pat.
After this simple step, call the JetRestoreInstance API; it will restore the backup from that folder.
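A rough sketch of those two steps from Python via ctypes: JetCreateInstance/JetRestoreInstance are documented esent.dll exports, but the paths, the instance name, and the minimal error handling below are assumptions; treat this as a sketch, not production restore code.

import ctypes
from pathlib import Path

backup_dir = r"C:\esent-restore"   # placeholder: folder with the unpacked backup
db_file = "database.edb"           # placeholder: your main database file name

# Step 1: zero-length .pat file named after the database file.
Path(backup_dir, Path(db_file).stem + ".pat").touch()

# Step 2: restore via esent.dll.
esent = ctypes.WinDLL("esent")
JET_errSuccess = 0

instance = ctypes.c_void_p()
err = esent.JetCreateInstanceW(ctypes.byref(instance), "restore")
assert err == JET_errSuccess, f"JetCreateInstance failed: {err}"

# Passing NULL for the destination restores to the instance's configured location.
err = esent.JetRestoreInstanceW(instance, backup_dir, None, None)
assert err == JET_errSuccess, f"JetRestoreInstance failed: {err}"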
The MS source server technology uses an initialization file named srcsrv.ini. One of the values identifies the source server location(s), e.g.,
MYSERVER=\\machine\foobar
The docs leave much unanswered about this value. To start with, I haven't been able to find the significance of the value name, i.e., what's on the left side, and I don't see it used anywhere else. Hewardt & Pravat in Advanced Windows Debugging say "The left side ... represents the project name", but that doesn't seem to jibe with MS's "MYSERVER" example.
What is the significance of the left side? Where else is it used? Does the value reference a server or a project, and is there one per server, or one per project?
For anyone looking into this in the future, I received the following information from MS:
The name on the left side is the logical name of a version control server. The name is also used in the source-indexed symbol files (.pdb). For example, a symbol file may contain this string value:
MYSERVER=mymachine1.sys-mygroup.corp.microsoft.com:2003
and the source files are referenced like this in the .pdb:
*MYSERVER*/base/myfolder/mycode.c
When SrcSrv starts, it looks at Srcsrv.ini for values; these values override the information contained in the .pdb file: "MYSERVER=mymachine.sys-mygroup.corp.microsoft.com:1666" overrides "MYSERVER=mymachine1.sys-mygroup.corp.microsoft.com:2003". This enables users to configure a debugger to use an alternative source control server at debug time. The info is documented at http://msdn.microsoft.com/en-us/library/ms680641.aspx.
So, it is a logical name for a source server, and its value can be changed at debug time to reference a different server than the one originally used when the PDBs were created.
The way the debugger retrieves your source is that srcsrv invokes a command-line utility. The utility program itself and the command line used vary depending on which type of repository hosts your code. One issue that prevents retrieval is the command-line program failing when it is invoked.
To find out why, use the command !sym noisy in WinDbg. It is mostly helpful in diagnosing symbol server issues, but for a source-indexed PDB it will also show the actual command line WinDbg used. Copy the command from the command log window and run it in CMD.EXE to get more details on the failure.