How to know if a Firebird 2.0 database is being accessed?

I know that with Firebird 2.5+ I can check whether there are users accessing my database using SQL, but unfortunately Firebird 2.0 doesn't have this feature. Yes, I know it's an old version, but it's legacy software and I'm not allowed to upgrade it in the short term... :(
I need to know whether someone is connected to my Firebird 2.0 database because of a process I will run:
Block connections to DB (but ONLY if no one is connected)
Run my process
Allow users to reconnect again
I can start my process only when there are no users connected.
My database is part of a client/server system (no Web).
Any hints?

-at[tach] : this parameter prevents any new connections to the database from being made, with the exception of the SYSDBA and the database owner. The shutdown will fail if there are any sessions still connected after the timeout period has expired. It makes no difference whether those connected sessions belong to the SYSDBA, the database owner or any other user. Any remaining connections will cause the shutdown to fail.
https://firebirdsql.org/manual/gfix-dbstartstop.html
There is also the Services API for this, so your database access library should expose a shutdown function. Request a shutdown with a short timeout: if it fails, there were users connected; if it succeeds, you can go on with maintenance, with the guarantee that client applications will not be able to connect.
Alternatively, you can upgrade Firebird 2.0 to 2.1, which is much closer to 2.0 than 2.5 is but already has the Monitoring Tables implemented.
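For illustration, on 2.1+ such a check could look roughly like the sketch below (a minimal example using the standard MON$ATTACHMENTS columns, excluding your own connection; adjust the column list to whatever you need):

SELECT MON$ATTACHMENT_ID, MON$USER, MON$REMOTE_ADDRESS, MON$REMOTE_PROCESS
FROM MON$ATTACHMENTS
WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION;

If this returns no rows, nobody else is attached at that moment, which is exactly the state the next paragraph warns can change at any time.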
However, this approach has one weak point: race conditions. Using the monitoring tables, you envision your work as follows:
Keep querying the monitoring tables (which slows the server down significantly) until there are no other connections.
Start the maintenance work, which would fail if other connections are active.
Complete the maintenance work.
The problem is that even after you reach the "no other connections" state in step 1, nothing guarantees that new connections will not be made between steps 1 and 2, and especially between steps 2 and 3.
Even if you run your checks and the step 1 condition holds, by the time you go on with the maintenance some new user may have connected and be working again. Not every time, of course, but as time goes by it will eventually happen one day.
But there is one more good thing in FB 2.1: database-level triggers.
c:\Program Files\Firebird\Firebird_2_1\doc\sql.extensions\README.db_triggers.txt
You can create a regular "all_current_connections" table, using on connect and on disconnect triggers to keep it up to date.
You would perhaps also have to add some logic to your applications so that they update that table with an internal application ID, to tell the main workflow apps/connections apart from the servicing utilities. However, the CURRENT_USER and CURRENT_CONNECTION pair, which the trigger knows and can store in the table, may be enough if you can infer the kind of application from the user name alone.
The ON DISCONNECT trigger could then check whether all "main workflow" apps have disconnected and use POST_EVENT to notify the servicing utilities. Those utilities would still have to shut down the database first, anyway.
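A minimal sketch of that idea in Firebird 2.1 PSQL (table, trigger, and event names are illustrative, not from the original answer):

CREATE TABLE all_current_connections (
  conn_id INTEGER NOT NULL,
  user_name VARCHAR(31),
  connected_at TIMESTAMP
);

SET TERM ^ ;

CREATE TRIGGER trg_track_connect ON CONNECT
AS
BEGIN
  -- record every new attachment
  INSERT INTO all_current_connections (conn_id, user_name, connected_at)
  VALUES (CURRENT_CONNECTION, CURRENT_USER, CURRENT_TIMESTAMP);
END^

CREATE TRIGGER trg_track_disconnect ON DISCONNECT
AS
BEGIN
  -- remove the attachment that is going away
  DELETE FROM all_current_connections WHERE conn_id = CURRENT_CONNECTION;
  -- if nothing is left, signal any waiting servicing utility
  IF (NOT EXISTS(SELECT 1 FROM all_current_connections)) THEN
    POST_EVENT 'no_active_connections';
END^

SET TERM ; ^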

You can shut down the database using gfix. The gfix tool will try to shut down the database, and if connections still exist after the timeout, the shutdown will fail.
For example, use:
gfix -shut -attach 5 <your-database>
This will:
prevent new connections from being created,
wait 5 seconds for the existing connections to end,
if after 5 seconds there are still active connections the shutdown will abort,
otherwise, after 5 seconds the database will be shut down.
After shutdown, only SYSDBA or the database owner can create a connection to the database. This is only a viable option if your application itself doesn't use the SYSDBA or database owner account.
You bring the database back online using:
gfix -online <your-database>
For more information, see also Gfix - Database Housekeeping: Database Startup and Shutdown

Well, not an elegant way, but it works...
I try to rename the database file.
If someone is accessing the database, the rename operation throws an exception saying that the file is in use by another process.
If the rename succeeds, new users cannot access the database anymore (the connection string used by my systems is not changed).
I run the exclusive process I have to run.
Then I rename the database file back to its original name, allowing new users to connect again.
I'm posting my solution in the hope that it helps someone facing a similar problem.
The new version of our product will probably be a Web application; the database has not been chosen yet, but it certainly will not be Firebird.
Thanks to all who tried to give me an answer.

Related

Postgres db constantly flooded with connections, mysterious roles -- hacked?

I wrote a simple finance tracking application in 2017 that uses a Heroku backend with a Postgres db. The application suddenly stopped working, and I traced the problem to the database.
I was unable to connect to the database, seeing this error:
psql: FATAL: too many connections for role
I thought maybe the app had a connection leak, so I shut the frontend down (I'm the only one who uses it) and reset all the db connections. I was then able to log in to the db, and noticed all these strange roles (hundreds?) that I don't recognize.
When I logged out of psql, I tried logging back in and was again denied with the "too many connections" error. The only way I can log back in is if I kill all connections again and immediately log in. If I wait 2-3 minutes the error comes back. I don't think my Heroku app is establishing all these connections with the db, because I'm tailing the logs and it's just sitting there.
Does anyone have any theories about what might be going on here? Have I been hacked maybe? How would you debug this further, and how might I fix the problem?
Thanks!
The database has obviously been hacked.
Shut it down and delete it right away.
Restore to a different cluster from a known good backup.
From now on, choose good passwords and use a restrictive pg_hba.conf that for example doesn't allow remote access for superusers.
Never, ever, operate your application with a superuser.
Examine your application for SQL injection vulnerabilities.
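If you first want to see what is actually connecting before you wipe anything, a hedged diagnostic sketch using the standard PostgreSQL catalog views (run it right after killing the excess connections, while you can still log in):

-- Who is connected right now, grouped by role and client address
SELECT usename, client_addr, count(*) AS sessions
FROM pg_stat_activity
GROUP BY usename, client_addr
ORDER BY sessions DESC;

-- List all roles to spot unfamiliar ones (watch for unexpected superusers)
SELECT rolname, rolsuper, rolcanlogin
FROM pg_roles
ORDER BY rolname;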
This may be because of a bot (made by hackers) that scans the internet and tries CVE exploits (N-day exploits) to see whether a server is vulnerable, and then launches that type of attack; or it may be someone on the same VNAT as you trying something odd. But one thing is for sure: it is a bot, because you cannot launch that many connections by hand.

Wildfly won't deploy when datasource is unavailable

I am using wildfly-8.2.0.Final.
There are several databases that I have to connect to. However, some of them are only used for certain functionality in the web application and do not need to be online all the time. So when WildFly starts, some of the datasources may be offline. However, an unavailable datasource causes WildFly not to deploy the .war, and I cannot find any way to solve this problem. Is there a way?
UPDATE:
I have a single table on a remote database server. Users will be able to query the table via my web application. The thing is, I have almost no control over that database. When the web application starts, the remote database could be offline, which would cause my web application to fail to start. What I want is to be able to run queries against the remote database if it is online; if it is offline, the web page can fail or the query can be cancelled. The only thing I don't want is for my web application to be limited by a remote database that I may have no control over.
My previous solution was a workaround: I ran queries against the remote database via a local database that has a foreign table pointing to the remote one. However, on PostgreSQL 9.5 the local database reads all the data in the remote table before applying any constraints. As the remote table has a large number of rows and I am using lazy loading, a single query takes so long that it defeats the whole purpose of the lazy loading.
I found a similar question, but there is no answer.
On WildFly, you can set the datasource so that it tries to reconnect periodically after it disconnects. In my case, though, the deployment would have to succeed initially for this to be helpful.
The deployment will fail if it references those datasources.
You could also define those datasources but leave them disabled.

MongoDB: How to prevent user from reading data even if he gets hold of database

I am working on Mongodb authorization.
I added users and am using mongod --auth while connecting to the database so that only authorized users are able to see the database.
Right now, the MongoDB instance can only be accessed through a VPN.
Suppose a hacker breaks into the server machine: he can stop the existing mongod process (which was running with security via --auth) and start a new one without authentication, after which he can see all the data in the database.
How can we secure the database so that it always requires a username/password?
Or is there some other way to prevent this?
Thanks.
If he breaks into the server machine, he won't restart mongo. He would simply copy the mongo database and open it on his own machine, without using mongo at all.
If the attacker has control of a server running processes P1, P2, ..., each Pi has to be considered breached, including its data.
The exception is strong isolation (i.e. virtual machines) and crypto: if the application encrypts all its data with a key whose generation is not fully automated (e.g. a passphrase entered at startup, a challenge/response the administrator needs to pass during boot, etc.), this may prevent the attacker from getting all the bits needed to decrypt it. Otherwise, if the application is able to encrypt and decrypt without any human help, the attacker is able to do so as well.
Those things do not apply to Mongo, which does not have support for stuff like that. Good old SQL databases have it, but they are not trendy any more ;)
On the specific user: are you afraid they will break in as the mongodb user or as another user? Because if they get the user foo, they may still have problems accessing mongodb (its data or its process) if local permissions are set well. But again, people tend to consider local privilege escalation (i.e. moving from foo to root and then to mongodb) something that happens whenever someone breaches: in roughly 100 pentests where I managed to get access to a machine, there were probably only one or two where I could not escalate.

SQL Server 2008 Service Broker tutorial -- cannot receive the message (exception in transmission_status)

I am learning how to use the Service Broker in SQL Server 2008 R2 and am following the tutorial Completing a Conversation in a Single Database. Following Lesson 1, I successfully created the message types, the contract, the queues, and the services. Following Lesson 2, I have presumably sent the message. However, when trying to receive the message, I get NULL for ReceivedRequestMsg instead of the sent content.
When looking at the sys.transmission_queue, the transmission_status for the message says:
An exception occurred while enqueueing a message in the target queue. Error: 15517, State: 1. Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.
I installed SQL Server using a Windows login of the form Mycomp\Petr, and I am also using that login for the lessons.
Can you guess what the problem is? What should I check and/or set to make it work?
Edited 2012/07/16: To help reproduce the problem, here is what I did. Can you reproduce the error if you follow the steps below?
Firstly, I am using Windows 7 Enterprise SP1, and Microsoft SQL Server 2008 R2, Developer Edition, 64-bit (ver. 10.50.2500.0, Root Directory located at C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL_PRIKRYL05\MSSQL).
Following the tutorial advice, I have downloaded the AdventureWorks2008R2_Data.mdf sample database, and copied it into C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQL_PRIKRYL05\MSSQL\DATA\AdventureWorks2008R2_Data.mdf
SQL Server Management Studio had to be launched "As Administrator" to be able to attach the data later. Then I connected to the SQL Server.
Right click on Databases, context menu Attach..., button Add..., pointed to AdventureWorks2008R2_Data.mdf + OK. Then selected the AdventureWorks2008R2_Log.ldf from the grid below (reported as Not found) and pressed the Remove... button. After pressing OK, the database was attached and the AdventureWorks2008R2_log.LDF was created automatically.
The following queries were used to check whether Service Broker is enabled and to enable it (Service Broker was enabled successfully for the database):
USE master;
GO
SELECT name, is_broker_enabled FROM sys.databases;
GO
ALTER DATABASE AdventureWorks2008R2
SET ENABLE_BROKER
WITH ROLLBACK IMMEDIATE;
GO
SELECT name, is_broker_enabled FROM sys.databases;
GO
Then, following the tutorial, the queries below were executed to create the message types, the contract, the queues, and the services:
USE AdventureWorks2008R2;
GO
CREATE MESSAGE TYPE
[//AWDB/1DBSample/RequestMessage]
VALIDATION = WELL_FORMED_XML;
CREATE MESSAGE TYPE
[//AWDB/1DBSample/ReplyMessage]
VALIDATION = WELL_FORMED_XML;
GO
CREATE CONTRACT [//AWDB/1DBSample/SampleContract]
([//AWDB/1DBSample/RequestMessage]
SENT BY INITIATOR,
[//AWDB/1DBSample/ReplyMessage]
SENT BY TARGET
);
GO
CREATE QUEUE TargetQueue1DB;
CREATE SERVICE
[//AWDB/1DBSample/TargetService]
ON QUEUE TargetQueue1DB
([//AWDB/1DBSample/SampleContract]);
GO
CREATE QUEUE InitiatorQueue1DB;
CREATE SERVICE
[//AWDB/1DBSample/InitiatorService]
ON QUEUE InitiatorQueue1DB;
GO
So far, so good.
The following queries are then used to look at the queues (which are empty at this point):
USE AdventureWorks2008R2;
GO
SELECT * FROM InitiatorQueue1DB WITH (NOLOCK);
SELECT * FROM TargetQueue1DB WITH (NOLOCK);
SELECT * FROM sys.transmission_queue;
GO
The problem manifests when the message is sent:
DECLARE @InitDlgHandle UNIQUEIDENTIFIER;
DECLARE @RequestMsg NVARCHAR(100);
BEGIN TRANSACTION;
BEGIN DIALOG @InitDlgHandle
FROM SERVICE
[//AWDB/1DBSample/InitiatorService]
TO SERVICE
N'//AWDB/1DBSample/TargetService'
ON CONTRACT
[//AWDB/1DBSample/SampleContract]
WITH
ENCRYPTION = OFF;
SELECT @RequestMsg =
N'<RequestMsg>Message for Target service.</RequestMsg>';
SEND ON CONVERSATION @InitDlgHandle
MESSAGE TYPE
[//AWDB/1DBSample/RequestMessage]
(@RequestMsg);
SELECT @RequestMsg AS SentRequestMsg;
COMMIT TRANSACTION;
GO
When looking at the queues, the Initiator... and Target... queues are empty, and the sent message can be found in sys.transmission_queue with the above-mentioned error reported in its transmission_status.
alter authorization on database::[<your_SSB_DB>] to [sa];
The EXECUTE AS infrastructure requires dbo to map to a valid login. Service Broker uses the EXECUTE AS infrastructure to deliver the messages. A typical scenario that runs into this problem is a corporate laptop used from home. You log in to the laptop using cached credentials, and you log in to SQL Server using the same cached Windows credentials. You issue a CREATE DATABASE and dbo gets mapped to your corporate domain account. However, the EXECUTE AS infrastructure cannot use cached Windows accounts; it requires direct connectivity to Active Directory. The maddening part is that things work fine the next day at the office (your laptop is back on the corporate network and can reach AD...). You go home in the evening, continue with Lesson 3... and all of a sudden it doesn't work anymore. It makes the whole thing seem flimsy and unreliable, when it is just the fact that AD connectivity is needed...
Another scenario that leads to the same problem comes from the fact that databases retain the SID of their creator (the Windows login that issued the CREATE DATABASE) when restored or attached. If you used a local account PC1\Fred when you created the DB and then copy/attach the database to PC2, the account is invalid on PC2 (it is scoped to PC1, of course). Again, not much is affected, but EXECUTE AS is, and this causes Service Broker to give the error you see.
The last example is when the DB was created by a user who later leaves the company and whose AD account gets deleted. It seems like revenge on his part, but he's innocent: the production DB just stops working, simply because it is his SID that dbo maps to. Fun...
By simply changing the database owner to the sa login, you fix this whole EXECUTE AS issue, and all the moving parts that depend on it (and SSB is probably the biggest dependency) start working again.
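Before changing anything, you can confirm whether you are in the orphaned-SID situation with a hedged diagnostic sketch like the one below (it catches the attach/restore and deleted-account cases; in the cached-credentials case the owner login will look valid but EXECUTE AS will still fail while AD is unreachable):

SELECT d.name AS database_name, sp.name AS owner_login
FROM sys.databases d
LEFT JOIN sys.server_principals sp ON d.owner_sid = sp.sid
WHERE d.name = N'AdventureWorks2008R2';

A NULL owner_login means dbo does not map to any existing server login, which is exactly what error 15517 complains about.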
You would need to grant receive on your target queue to your login. And it should work!
USE [YourDatabase]
GRANT RECEIVE ON [dbo].[YourTargetQueue]
TO [Mycomp\Petr];
GO
You also need to grant SEND for your user. Permission on the Target Service should be sufficient, but let's grant it on both services for the future.
USE AdventureWorks2008R2 ;
GO
GRANT SEND ON SERVICE::[//AWDB/1DBSample/InitiatorService]
TO [Mycomp\Petr] ;
GO
GRANT SEND ON SERVICE::[//AWDB/1DBSample/TargetService]
TO [Mycomp\Petr] ;
GO
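If you want to double-check that the grants took effect, a hedged verification sketch (not part of the tutorial) against the permission catalog views:

USE AdventureWorks2008R2;
GO
SELECT dp.name AS grantee, p.permission_name, p.class_desc, p.state_desc
FROM sys.database_permissions p
JOIN sys.database_principals dp ON p.grantee_principal_id = dp.principal_id
WHERE p.permission_name IN (N'RECEIVE', N'SEND');
GO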

Database migrations: manage with build script or automatic on app startup?

I'm in the process of developing a deployment system for a new web app and I'm wondering where the best point in the process to manage database migrations is (the question of how to do the migrations is another problem entirely).
It seems there are two ways to go:
Use a migration script that can either be run manually from the command line or as part of the automatic deployment/build process
Run the migrations when the app starts up (I'm using ASP.NET so this can be done easily enough without causing a long-running user request)
Does anyone have any suggestions/insight/experience with these approaches? Any other suggestions?
I can see why #1 might be more attractive - it gives me complete control over when the DB is updated. However, I quite like #2 as it allows me to quickly iterate between deployments and reduces the manual process. #2 could also be used on my development machine to allow even quicker iterations. Hmm, starting to think having both might be a good thing...
We have a sales-force system with ~100 clients, and we update the database at application startup (true, ours is a desktop application). I like this approach: it's safe and iterative when we have an indeterminate starting point (is the client database new, or only updated to version x.y.z?).
But on the server side I prefer your option #1: we create a SQL script on our virtual machine (based on a copy of the original database) and run that script against the real server.
So IMHO:
Disconnected clients: startup, iterative scripts
Server: script created on a VM based on a copy of the actual, real database
I'm interested in this problem too, and have found some (half-)frameworks such as RikMigrations. After some googling, a good starting place on DB versioning/migration frameworks is the .NET Database Migration Tool Roundup. Not necessarily for the documentation, but the team blogs can be interesting.
I like option #1 better as it seems much more flexible. In lieu of actually performing migrations on each app start, I think I would verify that the database schema (version number?) matches the code, and if not, throw a warning or error about a mismatched database schema.
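As a rough illustration of that check, a minimal sketch assuming a hypothetical schema_version table that your migration scripts keep up to date (the expected version would live in your application code or configuration):

-- created once; each migration script inserts its own version number
CREATE TABLE schema_version (
  version INT NOT NULL,
  applied_at DATETIME NOT NULL
);

-- at application startup, compare this with the version the code expects
SELECT MAX(version) AS current_version FROM schema_version;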
I'd prefer option #1 for a number of reasons. First, integration tests usually require your DB schema to be up-to-date, and launching a web-site to upgrade the schema will be a huge timewaster. Second, you cannot change database schema while your site is running (say, add a couple of indexes to speed things up).
As for the production side of things, upgrading your database in a transactional, MSI-style installation is much better than attempting to upgrade at each app startup, since otherwise you can end up with desynchronized database and application versions.
And if you're looking for the migration framework, take a look at Wizardby.
If the application ever has to run on a customer's machine, then migrating at startup can prevent a lot of support calls, assuming you can do a seamless migration without user intervention (I hope you aren't normally running your web app with permission to modify the database).
If the application always runs under your control, automatic migration is less of an issue, but it can still be a good feature, especially if you want to minimize downtime and manual deployment steps.