FAILARCHPATH not working for Db2 database

LOGARCHMETH1 and FAILARCHPATH are configured to point at different local disk paths, and I made the LOGARCHMETH1 path unavailable (by removing its privileges).
The FAILARCHPATH path was still never used (all the logs stayed in the active log path), yet the database was up and running the whole time.
Once the LOGARCHMETH1 path became available again, archiving resumed.
I don't understand why the logs were not archived to the location pointed to by the FAILARCHPATH setting.
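
For reference, a minimal sketch of the kind of configuration described above, assuming a database called SAMPLE and placeholder paths (adjust both to your environment):

    db2 update db cfg for SAMPLE using LOGARCHMETH1 DISK:/db2/archive
    db2 update db cfg for SAMPLE using FAILARCHPATH /db2/failarchive
    db2 get db cfg for SAMPLE | grep -iE 'LOGARCHMETH1|FAILARCHPATH'

FAILARCHPATH is intended as a temporary target when the primary archive method fails; once the primary location is reachable again, Db2 moves the logs from FAILARCHPATH back to it.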

Related

Rescue data from mongodb instance

Recently I've had some problems with my old hard drive and Windows, so I switched over to a new one. I can still access the files on the old hard drive, and I still have a MongoDB installation on there with some pretty important data (I should have done a backup at some point).
What would be the smartest way to get this data and transfer it to my new instance? Just copying the data files over? This "old" instance is, by the way, not running, and it is not possible for me to start it again.
You could get the "old" instance running again by running mongod on the system and pointing --dbpath to the data folder on the old hard drive.
It looks like copying the files over is a valid option though.
See https://docs.mongodb.com/manual/core/backups/#back-up-with-cp-or-rsync
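
As an illustration, a minimal sketch of both options, assuming the old data directory is mounted at /mnt/olddrive/mongodb/data and the new instance's dbPath is /var/lib/mongodb (both paths are placeholders):

    # Option 1: start a temporary mongod directly against the old files
    mongod --dbpath /mnt/olddrive/mongodb/data --port 27018

    # Option 2: with every mongod stopped, copy the raw files over
    cp -R /mnt/olddrive/mongodb/data/* /var/lib/mongodb/

If the old files were written by a much older mongod version, dumping from the temporary instance with mongodump and loading it with mongorestore may be safer than a raw file copy.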

SqlBase Database on One Drive

I have a database on Microsoft OneDrive, and I have 4 valid licenses from Gupta for SqlBase. When I try to run from PC 1 I can access the database, but when I try the same from PC 2 I get this:
Reason: Attempting to open an existing file and a failure has occurred.
Remedy: Determine and correct the cause of the open file failure.
Verify that the specified file exists. Verify the number of
files allowed open for the operating system permits the
additional file, that is, check the FILES= configuration
parameter setting.
I assume this is related to the database's LOG files and some settings in Sql.Ini, but I'm not able to find where or how.
The intention is to run the database on OneDrive, buy SqlBase licenses, and run a multi-user system; the application has been built for that.
Where is my thinking wrong?
What am I doing wrong?
Which settings are missing?
Thanks
That won't work.
SqlBase (like all other RDBMSs) is built to manage one database file plus its log files.
When multiple instances work with more-or-less replicated data files, it ends in a clash.
There are systems that can work as a distributed cluster (e.g. the document database RavenDB), but they are built to work like that (not with OneDrive, of course, but with their own replication mechanism). SqlBase is not.

LocalDeployer: app working directory

I have an app that creates a file temporarily and does not delete it. I was hoping to see the contents of the file while it was running.
The app is deployed using the local deployer; does anybody know where it would create the file?
I tried the temp path, and also the working directory where the out and error logs are... nothing. The app does not seem to be erroring either; that would show up in my normal console log.
I'm running on Unix; temp is at /tmp.
thanks
You can control this location via the local deployer properties workingDirectoriesRoot and deleteFilesOnExit.
For more information, see this doc:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#configuration-deployer
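
For example, a sketch of how these properties could be passed when starting the local Data Flow server (the jar name and the /var/scdf/work path are placeholders; the property prefix follows the linked doc):

    java -jar spring-cloud-dataflow-server-local.jar \
      --spring.cloud.deployer.local.working-directories-root=/var/scdf/work \
      --spring.cloud.deployer.local.delete-files-on-exit=false

With delete-files-on-exit=false the per-app working directories, and anything the app writes there with a relative path, are kept after the app stops, so the temporary file can be inspected.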
Actually, looking at the code of the local deployer, it seems the location it defaults to is the system temp path (System.getProperty("java.io.tmpdir")) plus the stream id, the app id, etc. It is the same folder the console and error streams are written to.
thanks!

sshfs -o follow_symlinks mounts with broken softlinks

Up until a day ago I was perfectly able to mount a drive via sshfs with the follow_symlinks option given.
In fact I set up an alias for this command (roughly the one sketched below) and used it for several weeks.
Since yesterday, using the same alias, the volume still mounts correctly, but all the soft links within that volume are broken.
Logging in via a regular ssh session confirms the symlinks are actually working.
Are there any configuration files that may interfere with what I'm trying to do?
I was modifying /etc/ssh/ssh_config and /etc/hosts because I experienced severe login delays when starting an ssh session from a friend's place, but I reverted those changes later on.
Could a wrong configuration in these files cause my issue?
Btw, I'm using Ubuntu 16.04.
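
For reference, the alias wrapped a command along these lines (user, host, and paths are placeholders):

    sshfs -o follow_symlinks user@remotehost:/remote/dir ~/mnt/remote
    # unmount again with: fusermount -u ~/mnt/remote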
It turns out that the permissions on the particular machine I was trying to mount the folder from had changed over the weekend.
It now only allows access to certain folders from within the same network. That is why my soft links (pointing to now permission-restricted content) appeared broken when mounting from my home network.

running an exe file from a sql job

I'm trying to run an exe from a SQL job.
The database is on the server, as is the exe file.
The exe is supposed to write to a log file.
Even though the SQL job succeeds, I see no change in the log file.
I've checked the exe locally, and it does work.
The job step runs as type CmdExec, and the command is:
\\ustlvint02\c\FixProjectsWhichFailedSync\FixProjectsWhichFailedSync.exe
(ustlvint02 is the server's name.)
The path is valid; I tested it by running it from my computer (and there, too, the log isn't created).
I'll appreciate any help you can offer.
Hadas
The account that SQL Server Agent runs under needs to have permissions to 1) run the exe in that location and 2) write to the log file location.
Find out which account is used by SQL Server Agent, then verify that this user has the proper execute and write permissions.
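
One way to check which account the SQL Server services run under (a sketch; it requires a SQL Server version that exposes this DMV):

    SELECT servicename, service_account
    FROM sys.dm_server_services;

The same information is also visible in SQL Server Configuration Manager on the SQL Server Agent service's Log On tab.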
Look for the log file in %WINDIR%\System32 (for the 32-bit version of SQL Server) or in %WINDIR%\SysWOW64 (for the 64-bit version of SQL Server), where %WINDIR% is the folder where Windows is installed (typically C:\Windows). This destination does not depend on the account specified for the SQL Agent job. Any file your executable needs to write to or read from must either be specified with an absolute path, or be specified with a relative path and therefore be present in the aforementioned system folder.
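
As a sketch of one way to control where relative writes land from the CmdExec step itself (assuming the log should end up next to the exe; pushd maps the UNC path to a temporary drive letter and changes into it):

    pushd \\ustlvint02\c\FixProjectsWhichFailedSync
    FixProjectsWhichFailedSync.exe
    popd

Alternatively, change the exe or its configuration to write the log using an absolute path.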