Redshift newbie here - greetings!
I am trying to unload data to S3 from Redshift, using a Java program running locally which issues an UNLOAD statement over a JDBC connection. At some point the JDBC connection appears to be lost on my end (an exception is caught).
Looking at the S3 location, however, it seems that the unload runs to completion. Admittedly, I am unloading a rather small set of data.
So my question is: in principle, how is UNLOAD supposed to behave in case of a lost connection (say, a firewall kills it, or someone does a kill -9 on the process that executes the unload)? Will it run to completion? Will it stop as soon as it senses that the connection is lost? I have been unable to find the answer either by reading the manual or by googling...
Thank you!
The UNLOAD will run until it completes, is cancelled, or encounters an error. Loss of the issuing connection is not interpreted as a cancel.
The statement can be cancelled from a separate connection using CANCEL or PG_CANCEL_BACKEND; a sketch follows the links below.
http://docs.aws.amazon.com/redshift/latest/dg/r_CANCEL.html
http://docs.aws.amazon.com/redshift/latest/dg/PG_CANCEL_BACKEND.html
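For illustration, a minimal sketch of cancelling a running UNLOAD from a second connection, assuming Python with psycopg2 (the cluster address, credentials, and the crude filter on the query text are placeholders):

import psycopg2

# A *separate* connection to the cluster; connection details are placeholders.
conn = psycopg2.connect(host='mycluster.example.redshift.amazonaws.com',
                        port=5439, dbname='dev', user='admin', password='...')
conn.autocommit = True
cur = conn.cursor()

# Look up the process id of the running UNLOAD.
cur.execute("SELECT pid, query FROM stv_recents WHERE status = 'Running'")
for pid, query in cur.fetchall():
    if 'UNLOAD' in query.upper():
        cur.execute('CANCEL %d' % pid)  # or: SELECT PG_CANCEL_BACKEND(pid)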
We are controlling a Keithley DMM6500 using the PyVISA library. In our setup, we keep an IPython kernel running (through Spyder).
The problem we're running into is the following: whenever a function that interacts with the DMM encounters an unhandled exception (like a KeyboardInterrupt), any subsequent calls to the DMM result in the error VI_ERROR_SYSTEM_ERROR (-1073807360): Unknown system error (miscellaneous error).
To fix this, we have tried calling device.clear() and device.close() / device.open(), but this doesn't seem to work. Even rebooting the device does not work. The only thing that fixes the issue, it seems, is to completely restart our IPython kernel.
Is there any way to programmatically restore communication with the device, so that we can avoid having to restart the IPython kernel?
Some of your question is unclear, so my answer might not help; however, it sounds like the terminal is locking the connection and you're losing the reference.
The two ways I have done this in the past:
1) Open the connection when talking to the device and close the connection when finished (see the sketch after the example below). This is useful if your connection is unstable, but opening and closing the connection a lot takes a fraction longer.
2) In your program, have a try/except to handle the connection to the instrument; when the program errors, close the connection so that it doesn't become locked.
Example:

try:
    run_program()
except Exception:
    # Build a function that clears the connection to all devices.
    close_connection_to_all_devices()
    # Maybe dump some of the variables to see what the data was when it errored, for debugging.
    dump_any_unsaved_data()
    raise  # re-raise so the original error is still visible
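For option 1, a minimal sketch of opening and closing around each interaction, assuming PyVISA; the VISA resource string is a placeholder for your DMM6500:

import pyvisa

RESOURCE = 'USB0::0x05E6::0x6500::01234567::INSTR'  # placeholder resource string

def query_instrument(command):
    # Open the connection, run one query, and always close it again.
    rm = pyvisa.ResourceManager()
    inst = rm.open_resource(RESOURCE)
    try:
        return inst.query(command)
    finally:
        inst.close()  # released even if the query raises (e.g. KeyboardInterrupt)

print(query_instrument('*IDN?'))

Because the session is closed in the finally block, an interrupted call should not leave a locked session behind.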
I am not sure if this should be posted as an issue on the MongoDB GitHub or asked here, so please bear with me.
What I want:
My first goal was to restart the mongod server using the MongoDB Python API (pymongo). I guess that is theoretically impossible, since after a shutdown the API loses any link to the server and can't ask it to restart.
So, I changed plans:
I just wanted to shut down the server from my Python code, pause the code (using input('Press enter')) to allow the user to manually restart the mongod server, and then continue execution after the user has restarted the server and pressed Enter.
The way I "shutdown" the server from pymongo and wait for manual restart inside the code is as follows:
from pymongo import MongoClient

shell = MongoClient(uri)
shell['admin'].command('shutdown')
input("Press Enter button after restart.")
Problem is:
I keep getting an "AutoReconnect" error when doing this, even after catching the error as follows:
try:
    shell['admin'].command('shutdown')
except Exception as e:
    if "closed" in str(e):
        print("[UPDATE] Server was successfully shutdown. Please restart it.")
        input("Press Enter button after you restart it.")
    else:
        raise e
Even after that, multiple "AutoReconnect" errors are raised, though without affecting the functionality of the code this time; they are just printed to stderr.
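For reference, the driver's specific exception class can be caught directly instead of string-matching the message; a minimal sketch, assuming pymongo.errors.AutoReconnect is what the shutdown command raises (this does not silence the background monitor's later log output):

from pymongo import MongoClient
from pymongo.errors import AutoReconnect

shell = MongoClient(uri)
try:
    shell['admin'].command('shutdown')
except AutoReconnect:
    # The server closed the connection as part of shutting down.
    print("[UPDATE] Server was successfully shut down. Please restart it.")
    input("Press Enter button after you restart it.")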
So, my question(s) are:
Is there a better way to restart the server from pymongo? Possibly one that does not involve human intervention?
How can I turn off the AutoReconnect errors that are thrown? I could not find any up-to-date method to do so.
I know that using Firebird 2.5+ I can check whether there are users accessing my database using SQL, but unfortunately Firebird 2.0 doesn't have this feature. Yes, I know it's an old version, but it's legacy software and I'm not allowed to upgrade it in the short term... :(
I need to know if someone is connected to my 2.0 Firebird database, due to a process I'll run:
Block connections to DB (but ONLY if no one is connected)
Run my process
Allow users to reconnect again
I can start my process only when there are no users connected.
My database is part of a client/server system (no Web).
Any hints?
-at[tach] : this parameter prevents any new connections to the database from being made, with the exception of SYSDBA and the database owner. The shutdown will fail if any sessions are still connected after the timeout period has expired; it makes no difference whether those connected sessions belong to SYSDBA, the database owner, or any other user. Any connections remaining will terminate the shutdown.
https://firebirdsql.org/manual/gfix-dbstartstop.html
There is also the Services API to do it, so your database access library should expose the shutdown function. Specify a short shutdown timeout; if it fails, there were still users connected. If it succeeds, you can go on with the maintenance, with a guarantee that client applications will not be able to connect.
Alternatively, you can upgrade Firebird 2.0 to 2.1, which is much closer to 2.0 than 2.5 is but already has the Monitoring Tables implemented.
However, this approach of yours has one weak point: race conditions. Using the Monitoring Tables, you envision your work as follows:
1. Keep querying the Monitoring Tables (which slows down server work significantly) until there are no other connections.
2. Start the maintenance work, which would fail if other connections were active.
3. Complete the maintenance work.
The problem is that even once you reach the "no other connections" state at step 1, it does not mean that no new connections will be made between steps 1 and 2, or especially between steps 2 and 3. Even if you make your checks and ensure the step 1 condition, by the time you go on with the maintenance some new user may have connected back and be working. Not every time, of course, but as time goes by it will eventually happen one day.
But there is yet one more good thing in FB 2.1: database-level triggers. See:
c:\Program Files\Firebird\Firebird_2_1\doc\sql.extensions\README.db_triggers.txt
You can create a regular "all_current_connections" table, using ON CONNECT and ON DISCONNECT triggers to keep it up to date.
You would perhaps also have to add some logic to your applications so that they update the table with your internal application ID, to distinguish main workflow apps/connections from servicing utilities. However, it is also possible that the mere CURRENT_USER and CURRENT_CONNECTION pair, which the trigger knows and can store in the table, would be enough, if you can infer the kind of application from the user name alone.
Then the ON DISCONNECT trigger might check whether all "main workflow" apps have disconnected and POST_EVENT to notify the servicing utilities. Those utilities would still have to shut down the database first, anyway.
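A minimal sketch of such triggers, assuming Firebird 2.1+ and the Python fdb driver; the table name, trigger names, and connection details are made up for illustration:

import fdb

# Connect as SYSDBA or the owner; DSN and credentials are placeholders.
con = fdb.connect(dsn='server:/path/to/db.fdb', user='SYSDBA', password='masterkey')
cur = con.cursor()

# Table mirroring the currently open connections.
cur.execute("CREATE TABLE all_current_connections ("
            "conn_id INTEGER NOT NULL, user_name VARCHAR(31) NOT NULL)")

# ON CONNECT: register the new connection.
cur.execute("CREATE TRIGGER trg_conn ON CONNECT AS BEGIN "
            "INSERT INTO all_current_connections (conn_id, user_name) "
            "VALUES (CURRENT_CONNECTION, CURRENT_USER); END")

# ON DISCONNECT: remove the row again.
cur.execute("CREATE TRIGGER trg_disc ON DISCONNECT AS BEGIN "
            "DELETE FROM all_current_connections "
            "WHERE conn_id = CURRENT_CONNECTION; END")

con.commit()

Note that if a client dies without a clean disconnect, the ON DISCONNECT trigger may not fire, so stale rows can remain in the table.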
You can shut down the database using gfix. The gfix tool will try to shut down the database and, if connections still exist after a timeout, the shutdown will fail.
For example, use:
gfix -shut -attach 5 <your-database>
This will:
prevent new connections from being created,
wait 5 seconds for the existing connections to end,
if after 5 seconds there are still active connections the shutdown will abort,
otherwise, after 5 seconds the database will be shut down.
After shutdown, only SYSDBA or the database owner can create a connection to the database. This is only a viable option if your application itself doesn't use the SYSDBA or database owner account.
You bring the database back online using:
gfix -online <your-database>
For more information, see also Gfix - Database Housekeeping: Database Startup and Shutdown
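A minimal sketch of the block/run/restore flow, driving gfix from Python; the database path, credentials, and run_exclusive_process are placeholders, and a non-zero gfix exit code is taken to mean the shutdown failed because connections remained:

import subprocess

DB = 'server:/path/to/your.fdb'  # placeholder
GFIX = ['gfix', '-user', 'SYSDBA', '-password', 'masterkey']

# 1) Block new connections; abort if someone is still connected after 5 seconds.
shut = subprocess.run(GFIX + ['-shut', '-attach', '5', DB],
                      capture_output=True, text=True)
if shut.returncode != 0:
    print('Users still connected, try again later:', shut.stderr)
else:
    try:
        run_exclusive_process()  # 2) the maintenance work
    finally:
        # 3) Let users reconnect, even if the maintenance step failed.
        subprocess.run(GFIX + ['-online', DB], check=True)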
Well, not an elegant way, but it works... (a short sketch follows the steps below)
1. I try to rename the database file. If someone is accessing the database, the rename operation throws an exception saying that the file is in use by some process.
2. If the rename succeeds, new users cannot access the database anymore (the connection string used by my systems is not changed).
3. I run the exclusive process I have to.
4. I rename the database file back to its original name, allowing new users to connect again.
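A minimal sketch of that flow in Python; the paths and run_exclusive_process are placeholders, and it relies on the OS refusing to rename a file that another process holds open (as Windows does):

import os

DB = r'C:\data\mydb.fdb'         # placeholder path used by the clients
HIDDEN = r'C:\data\mydb.locked'  # temporary name while maintenance runs

try:
    os.rename(DB, HIDDEN)        # fails while some process has the file open
except OSError:
    print('Someone is still connected; try again later.')
else:
    try:
        run_exclusive_process()  # the exclusive maintenance work
    finally:
        os.rename(HIDDEN, DB)    # allow new users to connect again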
I post my solution in the hope that it helps someone facing a similar problem.
The new version of our product will probably be a Web application, and the database has not been chosen yet, but it certainly will not be Firebird.
Thanks to all that tried to give me an answer.
I have a server that runs PostgreSQL. In the logs I am seeing this message for my Resque-based 'worker' box, multiple times a minute. Some minutes there isn't a message; in other minutes it can appear 10 times.
2016-01-12 13:40:36 EST:1.1.8.2(33899):[16141]: LOG: could not receive data from client: Connection reset by peer
Now, when I go onto the 1.1.8.2 box and look at netstat -ntp, I don't see port 33899, and most of the ports are at least in the 40xxx range by now. That may be conjecture, but I'm at a loss to find out why a Redis/Resque/Puma Rails stack would be printing out these messages, let alone what they mean even if I get to the bottom of it.
Will I gain memory back if they are closed 'normally'?
Is this a thing to be wary of?
How does one debug old ports that were open when neither the db box nor the worker box displays those ports any more?
This message is probably due to the resque worker task not closing the database connection before it exits. It's not a huge problem, but presumably Postgres is doing a little extra work to clean it up, and it makes a mess of your log file...
One solution is to add a hook to your resque worker's task file (the same file that contains the self.perform definition):
def self.after_perform(*args)
  ActiveRecord::Base.connection.disconnect!
end
I'm working on an experiment for a course I'm taking about tuning DB2. I'm using EC2 from Amazon (AWS) to conduct the experiment.
My problem is that I have to test no compression against row compression in DB2, and to do that I've created a bsh file that runs the experiments. But when I reach the compression part I get the error "Transaction log is full", and no matter how low I set the number of inserts, it keeps complaining about my transaction log.
I've scoured Google for a day now, trying to find some way to flush / clear the log or just get rid of it; I don't need it. I've tried to increase the size, but nothing has helped.
Please, I hope someone has an answer to this frustrating problem.
Thanks
- Mestika
There is no need to "clear the log" in DB2. When a transaction completes (commits or rolls back), DB2 releases the log space used by that transaction.
If you've increased the log size and it has not helped, please post more information about what you're trying to do.
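One thing worth checking: if the test inserts all run in a single unit of work, the active log must hold all of them at once. Committing in batches lets DB2 reclaim log space as the run progresses. A minimal sketch, assuming the ibm_db_dbi DB-API driver and a made-up table:

import ibm_db_dbi

# Connection string values are placeholders.
conn = ibm_db_dbi.connect('DATABASE=sample;HOSTNAME=localhost;PORT=50000;'
                          'PROTOCOL=TCPIP;UID=db2inst1;PWD=secret', '', '')
cur = conn.cursor()

BATCH = 1000  # commit every 1000 rows so completed work releases its log space
for i in range(100000):
    cur.execute('INSERT INTO test_table (id, payload) VALUES (?, ?)',
                (i, 'x' * 100))
    if (i + 1) % BATCH == 0:
        conn.commit()
conn.commit()  # commit the final partial batch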
No need to restart. Just try to force off the applications using db2 force applications all.
Increase the active log file size (LOGFILSIZ), force off the application connections, and terminate; then try to run the job again. For example:
db2 force applications all
db2 update db cfg for sample using logfilsiz 5125
db2 force applications all
db2 terminate
db2 connect to sample
Run your job and monitor.
Just restart the instance; it will release the pending logs and you should be fine.