I have a database on "Microsoft OneDrive" and I have 4 valid licenses from Gupta for SqlBase. When I run from PC 1 I can access the database, but when I try the same from PC 2 I get this error:
Reason: Attempting to open an existing file and a failure has occurred.
Remedy: Determine and correct the cause of the open file failure. Verify that the specified file exists. Verify the number of files allowed open for the operating system permits the additional file, that is, check the FILES= configuration parameter setting.
I assume this is related to the LOG files of the database and some settings in the Sql.Ini, but I'm not able to find where or how.
The intention is to run the database on "OneDrive", buy SqlBase licenses, and run a multi-user system. The application has been built as such.
Where is my thinking wrong?
What am I doing wrong?
What settings are missing?
Thanks
That won't work.
SqlBase (like every other RDBMS) is built to manage one database file plus its log files from a single server instance.
When multiple instances work with more-or-less replicated copies of the data files, this ends in a clash.
There are systems in the world that can work as a distributed cluster (e.g. the document database RavenDB), but they are built to work like this (not with OneDrive, of course, but with their own replication mechanism). SqlBase is not.
I had to change the name of the Windows 7 system. Unaccountably, VNC Server is still using the old computer name. This is the RealVNC free version. I re-installed it, but it is still using the old computer name.
I had a Z400 motherboard go bad, and it took the disk drive with it. I replaced the motherboard ($39 was cheap) and cloned one of my other Z400 workstation C drives using Acronis. I booted the replacement motherboard with the cloned copy, changed its name to the old defective one's, and activated Windows. When it rebooted, VNC Server still had the old computer name; I cannot get rid of it, and it is conflicting with the VNC Server on the other Z400, since they both use the same name. There is no option in the server to use a different name that I can find anywhere.
IPs are different and all systems behave fine. I can ping them and even access shares using their names. The problem system clearly shows the correct name but, unaccountably, VNC Server is using the wrong name.
This system will be upgraded to 10 in a few days, maybe the problem will go away when that happens.
Solved! First I had to log in and reduce the number of clients to under 5, as that was my limit. Then I had to remove the problem system's leftover name. This was all done on the RealVNC web site. Once under 5 systems, I could add the problem one, and once it connected to RealVNC's cloud it got the correct name. This was an artifact of having more than 5 systems when I was only licensed for 5. The "6th" one worked locally, as its setup was still valid, but it was refused a connection to the cloud, so it never got to change its name. It "worked" until I did a flushdns and its old setup was no longer valid.
I want to use PostgreSQL on Windows Server 2012 R2 for one of our projects that needs 24/7 uptime.
I would like to ask the community whether I can have 2 master instances on 2 different servers, A and B, that 'work' on the same DB located on shared file storage on the LAN. Normally the master instance on server A will be online, and when it goes offline for some reason, a PowerShell script will (I suppose) recognize that the postgresql service stopped and will start the service on server B. The same script will continuously check that only one service on servers A and B is running, to avoid conflicts.
I'd like to ask whether this is possible, or whether there is a better approach for my configuration.
(I can't use replication, because when server A shuts down, server B is in read-only mode, which I don't want.)
If you manage to start two instances of PostgreSQL on the same data directory, serious data corruption will happen.
Normally there is a postmaster.pid file that prevents that, but a PostgreSQL server process on a different machine that accesses the same file system will happily unlink that after spewing some log messages, thinking it was left behind from a crash.
So you are really walking on thin ice with a solution like that.
One other issue that you didn't think of concerns the script that is supposed to check whether the server is still running. What if that check fails because, for example, the network connection between the two servers is down, but the server is still up and running happily? Such a “split brain” scenario will cause data corruption with your setup.
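To make that failure mode concrete, here is a minimal sketch of the kind of watchdog you describe (in Python rather than PowerShell; the host name and the Windows service name are assumptions):

import socket
import subprocess
import time

def postgres_reachable(host, port=5432, timeout=3):
    # A failed connection only proves we could not reach the port,
    # NOT that the postmaster on the other server is down.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    if not postgres_reachable("server-a"):  # hypothetical host name
        # DANGER: on a mere network partition, server A's postmaster is still
        # running, and starting a second one against the shared data directory
        # is exactly the corruption scenario described above.
        subprocess.run(["net", "start", "postgresql-x64-9.4"])  # assumed service name
    time.sleep(10)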
Another word of caution: since you seem to be using Windows (Powershell?), you probably envision a CIFS file system when you are talking of shared storage. A Windows “network share” is not a reliable file system — last time I checked, it did not honor _commit.
Creating a reliable failover cluster is harder than you think, and I'd recommend that you check existing solutions before you try to roll your own.
I have two drives, A and B. Using a Python script I am creating some files on drive A, and I am running a PowerShell script that copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in PowerShell:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process.
Waiting 30 seconds...
How can I overcome this error?
This happens because the file is locked by a running process. To fix this, download Process Explorer, then use Find > Find Handle or DLL to find out which process has locked the file. Use 'taskkill' to kill that process from the command line, and you will be fine.
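If you prefer to script that lookup, here is a minimal sketch using the third-party psutil package (an assumption, not part of the question's setup); note that handles on remote shares may not show up this way:

import psutil  # third-party: pip install psutil

target = r"\\x.x.x.x\share1\source\Dummy_100.txt"  # the locked file from the error

for proc in psutil.process_iter(["pid", "name"]):
    try:
        for f in proc.open_files():
            if f.path.lower() == target.lower():
                print(proc.info["pid"], proc.info["name"])
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue  # some processes can't be inspected without admin rights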
If you want robocopy to give up on such files sooner, you can use /r:n, where n is the number of retries on failed copies, together with /w:n for the wait between retries.
For example, /r:5 /w:3 will retry each failed file 5 times, waiting 3 seconds between attempts.
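For instance, a sketch of how those switches could slot into the copy loop from the question, with the paths assumed:

import subprocess

# /r:5 = retry each failed file 5 times, /w:3 = wait 3 seconds between tries
result = subprocess.run(
    ["robocopy", r"A:\files", r"B:\files", "/r:5", "/w:3"],
    capture_output=True, text=True,
)
# robocopy exit codes below 8 mean success (e.g. 1 = files were copied)
print(result.returncode)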
How can I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, look into the Volume Shadow Copy Service (VSS), which allows files to be copied despite them being 'in use'. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
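For what it's worth, a minimal sketch of combining the two; the invocation pattern follows the shadowspawn README as I recall it, and the paths and the Q: drive letter are assumptions:

import subprocess

# shadowspawn <source> <drive:> <command...> snapshots <source>, mounts the
# snapshot at <drive:>, runs <command> against it, then cleans up.
subprocess.run([
    "shadowspawn", r"A:\files", "Q:",        # source and temporary drive letter
    "robocopy", "Q:\\", r"B:\files", "/e",   # copy from the snapshot, not the live files
])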
There could be many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I had the same problem, because the script was trying to write to a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked. I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves that were locked, by the SQL Server Agent. As @Oseack said, another tool may be needed while the backup files themselves are still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait.
/W:5
did it.
The context is OLAP cube development. After configuring my project through SQL Server Data Tools (SSDT, the new BIDS), I am unable to deploy the project.
Every time the deployment process is started I get an error like the one below:
File system error: The following error occurred while opening the file '\\?\D:\[...]\database\mssql\tmpdb\MDTempStore_1864_9_no8wd.tmp': Access is denied.
(The [...] denotes a part of the path I omitted for brevity.)
I always get the same error, indicating that some .tmp file could not be accessed.
My environment:
OS: Windows Server 2008 R2 Standard, SP1
SQL Server: SQL Server 2012 (v11.0.2100.60), running on localhost
What I tried:
I have the File System access rights for the folder in question (at some point I even tried with Admin privileges on the machine, didn't help)
I tried deactivating the anti-virus in case it was performing an on-access scan (still didn't help)
Attempts to deploy/process individual dimensions cause the same problem
Deploying dimensions or cubes programmatically through SMO (instead of SSDT) runs into the same problem
Deploying DataSource objects as well as DataSourceView objects works fine
Maybe some of you have faced similar issues or have further suggestions/ideas?
Thanks for your help!
So, I finally figured it out.
As expected it was a permission issue, but despite the error message hinting at some missing file system permissions, the cause of the problem was the user I configured the Data Source with.
The SQL User I specified was given the roles
db_datareader
db_datawriter
db_ddladmin
on the source database, but this doesn't seem to be enough. When I tried giving it the server role sysadmin, it started working.
This is probably overkill; one could fine-tune the role assignment further, but for now it also works that way.
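For reference, a minimal sketch of the grant that made it work, with a hypothetical login name (pyodbc and the driver name are assumptions too; ALTER SERVER ROLE is available on SQL Server 2012):

import pyodbc  # third-party: pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes",
    autocommit=True,
)
# 'olap_ds_user' stands in for the login configured in the Data Source
conn.execute("ALTER SERVER ROLE sysadmin ADD MEMBER [olap_ds_user]")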
Just a suggestion here - have you tried running SSDT as an administrator? That is, right-click on SSDT and click Run As Administrator. Then try to deploy your project. It definitely sounds like a permissions issue.
The exact reason is that the SSAS service user does not have access to the folders specified in the SSAS configuration (i.e. the error states it is the temp folder). I think it is not directly related to SQL Server, because it is just a file access error; it is thrown before anything reaches SQL Server.
Give the SSAS service user full permission on those folders.
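For example, a hedged sketch using icacls; the folder path is a placeholder and MSSQLServerOLAPService is only the default-instance SSAS service account, so adjust both for your setup:

import subprocess

folder = r"D:\path\to\tmpdb"                     # placeholder for the temp folder from the error
account = r"NT Service\MSSQLServerOLAPService"   # assumed default SSAS instance account
# (OI)(CI)F = full control, inherited by subfolders and files
subprocess.run(["icacls", folder, "/grant", account + ":(OI)(CI)F"], check=True)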
Regards
Onur
I came across this error, which is apparently pretty common on Linux systems:
"Too many files Open"
In my code I tried to set the Python open file limit to unlimited and it threw an error saying that I could not exceed the system limit.
import resource

try:
    # (500, -1) means: soft limit 500, hard limit unlimited (RLIM_INFINITY);
    # only a privileged process may raise the hard limit, hence the error
    resource.setrlimit(resource.RLIMIT_NOFILE, (500, -1))
except Exception as err:
    print(err)
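For comparison, a minimal sketch of what an unprivileged process is allowed to do: raise the soft limit up to the existing hard limit, but no further.

import resource

# An unprivileged process may raise its soft limit up to the current hard
# limit, but cannot raise the hard limit itself.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))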
So...I Googled around a bit and followed this tutorial.
However, I set everything to 9999999, which I thought would be as close to unlimited as I could get. Now I cannot open a session as root on that machine. I can't log in as root at all and am pretty much stuck. What can I do to get this machine working again? I need to be able to log in as root! I am running CentOS 6 and it's as up to date as possible.
Did you try turning it off and on?
If this doesn't help, you can supply init=/bin/bash as a kernel boot parameter to get a root shell, or boot from a live CD and revert your changes.
After running 'strace su -', I looked for 'No such file or directory' errors. When comparing the output, I found that some of those errors are OK; however, there were other files missing on my problem system that existed on a comparison system. Ultimately, it led me to a faulty line in /etc/pam.d/system-auth-ac referencing an invalid shared object.
So my recommendation is to go through your /etc/pam.d config files and validate the existence of the shared object libraries they reference, or look in /var/log/secure, which should give some clue about missing shared objects as well.
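If you want to automate that check, here is a minimal sketch that scans /etc/pam.d and flags referenced PAM modules that don't exist on disk (the module directories are the typical CentOS 6 locations, an assumption):

import os

PAM_DIR = "/etc/pam.d"
MODULE_DIRS = ["/lib64/security", "/lib/security"]  # typical CentOS 6 locations

for name in os.listdir(PAM_DIR):
    path = os.path.join(PAM_DIR, name)
    if not os.path.isfile(path):
        continue
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            for token in line.split():
                if token.endswith(".so"):
                    # module paths may be absolute or relative to MODULE_DIRS
                    found = (os.path.exists(token) if token.startswith("/")
                             else any(os.path.exists(os.path.join(d, token))
                                      for d in MODULE_DIRS))
                    if not found:
                        print(path, "references missing module", token)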