I'm using SQL Developer to export just the DDL of an Oracle database with 3 schemas in it.
The export ran for approx 12 hours, then popped up a message stating
File export.sql was not opened because it exceeds the maximum automatic open size
I've got 2 questions really
Has the export finished at this point?
If it hasn't, is there a way to increase the maximum automatic open size?
I haven't used SQL Developer to export DDL before, so I'm not sure if this is just the tool trying to open the file after a successful export.
Any tips or help greatly appreciated.
Yes, the file is there on your disk, where you told us to put it.
There's no way to increase this limit.
You can open the file if you want to, but I'd caution against this if the file is very large. If you want to execute it, use the #file.sql notation.
If you want to browse it, use tail or head.
So some of us devs are starting to take over the management of some of our SQL Server boxes as we upgrade to SQL Server 2008 R2. In the past, we've manually reduced the log file sizes by using
USE [databaseName]
GO
DBCC SHRINKFILE('databaseName_log', 1)
BACKUP LOG databaseName WITH TRUNCATE_ONLY
DBCC SHRINKFILE('databaseName_log', 1)
and I'm sure you all know that TRUNCATE_ONLY has been deprecated.
So the solutions I've found so far involve setting the recovery model to SIMPLE, then shrinking, then setting it back; however, this one got away from us before we could get there.
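Something like this sketch is what I mean (same placeholder database and log file names as in the snippet above; a full backup afterwards restarts the log chain):
-- switch to SIMPLE so the log can be truncated without a log backup
ALTER DATABASE [databaseName] SET RECOVERY SIMPLE
GO
USE [databaseName]
GO
DBCC SHRINKFILE('databaseName_log', 1)
GO
-- switch back to FULL, then take a full backup
ALTER DATABASE [databaseName] SET RECOVERY FULL
GO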
Now we've got a full disk, and the mirroring that is going on is stuck in a half-completed, constantly erroring state where we can't alter any databases. We can't even open half of them in object explorer.
So from reading about it, the way to keep this from happening in the future is to have a maintenance plan set up (whoops :/ ). But while we can create one, we can't start it with no disk space and SQL Server stuck in its erroring state (Event Viewer is showing it recording errors at about 5 per second... this has been going on since last night).
Anyone have any experience with this?
So you've kind of got a perfect storm of bad circumstances here in that you've already reached the point where SQL Server is unable to start. Normally at this point it's necessary to detach a database and move it to free space, but if you're unable to do that you're going to have to start breaking things and rebuilding.
If you have a mirror and a backup that is up to date, you're going to need to blast one unlucky database on the disk to get the instance back online. Once you have enough space, then take emergency measures to break any mirrors necessary to get the log files back to a manageable size and get them shrunk.
The above is very much emergency recovery and you've got to triple check that you have backups, transaction log backups, and logs anywhere you can so you don't lose any data.
Long term, to manage the mirror you need to make sure that your mirrors remain synchronized and that full and transaction log backups are being taken, and potentially reconfigure each database on the instance with a maximum log file size so that the sum of all log files does not exceed the available volume space.
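If it helps, the emergency break-and-shrink and the long-term size cap look roughly like this (a sketch only, using the same placeholder names the question used; the 8192 MB cap is an arbitrary example you would size to your volume):
-- break the mirror for the affected database (re-establish it once space is recovered)
ALTER DATABASE [databaseName] SET PARTNER OFF
GO
-- once the log has free space to release (after a log backup or a switch to SIMPLE), shrink it
USE [databaseName]
GO
DBCC SHRINKFILE('databaseName_log', 1)
GO
-- cap the log file so the sum of all logs stays inside the volume
ALTER DATABASE [databaseName]
MODIFY FILE (NAME = 'databaseName_log', MAXSIZE = 8192MB)
GO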
Also, I would double check that your system databases are not on the same volume as your database data and log files. That should help with being able to start the instance when you have a full volume somewhere.
Bear in mind, if you are having to shrink your log files on a regular basis then there's already a problem that needs to be addressed.
Update: If everything is on the C: drive, then consider reducing the size of the page file to get enough space to bring the instance online. I'm not sure what your setup is here.
I have a project that involves downloading big files (above 50 MB) from Amazon S3. The download stops without an error, so because of the large file size I chunk the file into smaller pieces and download the chunks simultaneously. But when I append the chunk data into a single NSMutableData in the correct order, the video won't play. Any idea about this?
Please help me, I've been sitting on this project for a whole week. T_T
You shouldn't manage this amount of data in RAM alone.
You should use secondary storage instead (namely via NSFileManager), as explained here.
When you're done downloading the file, play it normally. If you're sure the user won't really need it anymore, just delete it right after playback.
[edit]
Or, you might as well just use MPMoviePlayerController pointing at that URL directly.
What you need to do is create a file of the appropriate size first. Each downloader object must know the offset in the file to put its data, which it should write as it arrives rather than store in a mutable data object. This will greatly lower the memory footprint of the operation.
There is a second component: you must set the F_NOCACHE flag on the open file (via fcntl) so iOS does not keep the file writes in its cache.
With both of these it should work fine. Also use a lot of asserts during development so you know ASAP if something fails, so you can correct whatever the problem is.
I've got some old code on a project I'm taking over.
One of my first tasks is to reduce the final size of the app binary.
Since the contents include a lot of text files (around 10,000 of them), my first thought was to create a database containing them all.
I'm not really used to SQLite and Core Data, so I've got basically two questions:
1 - Is my assumption correct? Should my SQLite file have a smaller size than all of the text files together?
2 - Is there any way of automating the task of getting them all into my newly created database (maybe using some kind of GUI or script), one file per record inside a single table?
I'm still experimenting with Core Data, but I've done a lot of searching already and could not find anything relevant to bringing everything together inside the database file. Doing that manually has proven to be no easy task already!
Thanks.
An alternative to using SQLite might be to use a zip file instead. This is easy to create, and will surely save space (and definitely reduce the number of files). There are several implementations for using zip files on the iPhone, e.g. ziparchive or TWZipArchive.
1 - It probably won't be any smaller, but you can compress the files before storing them in the database. Or without the database for that matter.
2 - Sure. It shouldn't be too hard to write a script to do that.
If you're looking for a SQLite bulk insert command to write your script for 2), there isn't one AFAIK. Prepared insert statements in a loop inside a transaction is the best you can do; I imagine it would take only a few seconds (if that) to insert 10,000 records.
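For example, the SQL side of such a script could be as simple as this (a sketch only: it assumes a single documents table and the readfile() helper that the sqlite3 command-line shell provides; the file names are placeholders, and a small script would generate one INSERT per file):
CREATE TABLE IF NOT EXISTS documents (name TEXT PRIMARY KEY, body TEXT);
BEGIN TRANSACTION;
INSERT INTO documents (name, body) VALUES ('file0001.txt', readfile('file0001.txt'));
INSERT INTO documents (name, body) VALUES ('file0002.txt', readfile('file0002.txt'));
-- ... one INSERT per remaining file ...
COMMIT;
Running all the inserts inside one transaction is what keeps it fast; committing after every row would be far slower.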
I'm having problems exporting a 3305 page report (95000 records) using CR 8 to RTF.
When exporting a TXT file, it works.
But...
When exporting a large RTF, the program hangs at about 42% of the export process. Later it frees up the system, appears to finish, and outputs a file. The file itself is not complete (many records missing), and the formatting is gone (everything displays vertically, one word on top of another).
My setup has Windows XP SP2, an Intel Pentium 2.8 GHz CPU, and about 512 MB of RAM. On another machine with twice that amount it only got to 43%.
When exporting a large DOC, the Reports module hangs at about 63% of the export process. Later it frees up the system, and outputs a file. The file itself is in Word 2.0, and I cannot open it on my screen.
Excel 8 is also a no-go.
Upgrading CR is not an option for me at this point.
The customer wants this feature to work, and is not presently willing to filter the report and export in smaller chunks (the nature of their work requires them to have it as one single document with a single date stamp at the bottom of the page, among other reasons).
It seems like it could be a memory issue.
I also wonder if there aren't limits to the size of an RTF, Word, or Excel file. I think Excel is only good up to about 65,000 records per worksheet.
Any ideas?
P.S. - I had a look at the other suggested topics similar to this, and did not find the answer I was looking for.
I also sent an email to Crystal Reports, but I think they're now owned by another company, and I wonder whether that company still supports version 8. I thought I read elsewhere that they do not. Does anyone know who is still supporting version 8?
Excel (pre-2007), at least, does have a max record count, and I think it's 65,536 rows (Excel 2007: 1,048,576 rows and 16,384 columns). There may be similar limitations with Word, but I would think that's unlikely, and that the limitations are a result of the exporting functionality in your version of Crystal...
Also, I'm pretty sure you're SOL with getting support from SAP (which owns CR) for version 8. In my travels working with Crystal Reports (from a distance), I've seen many issues with exporting from CR that have been (recently) corrected with updates to the ExportModeler library.
Good luck with finding some help with CR8; even though you'd mentioned upgrading CR is not an option, I think it'd be your only recourse... :(
Years ago I had a problem where the temp file that the Crystal Report was generating for very large exports took up all the available space on the hard drive. Check to see how much space you have on your temp drive (usually C:). You can also watch the disk space as the export occurs to see if it is chewing up the space. It will magically stall (e.g. at 42% complete) when it gets down to almost zero. After the process fails, the temp file is deleted and your disk space goes back to normal.
I have TFS installed on a single server and am running out of space on the disk. (We've been using the instance for about 2 years now.)
Looking at the tables in SQL Server, what seems to be the culprit is the tbl_content table; it is at 70 GB. If I do a get on the entire source tree for all projects, it is only about 8 GB of data.
Is this just all the histories of the files? A 10:1 ratio seems like a lot for just the histories, since I would think the deltas would be very small.
Does anyone know if that is a reasonable size given 8 GB of source (and 2 years of activity)? And if not, what should I look at to 'fix' this?
Thanks
I can't help with the ratio question at the moment, sorry. For a short-term fix you might check to see if there is any space within the DB files that can be freed up. You may have already, but if not:
SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;
If the statement above returns some space you want to recover, you can look into a one-time DBCC SHRINKDATABASE or DBCC SHRINKFILE, along with scheduling a routine SQL maintenance plan that may include defragmenting the database.
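For example (a sketch only; TfsCollection is a placeholder for your actual TFS database name, and the logical file name and target size are likewise placeholders you would read from sys.database_files):
USE [TfsCollection]
GO
DBCC SHRINKDATABASE ('TfsCollection', 10)   -- shrink the whole database, leaving 10 percent free
GO
DBCC SHRINKFILE ('TfsCollection_Data', 60000)   -- or shrink a single file to a 60000 MB target
GO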
DBCC SHRINKDATABASE and DBCC SHRINKFILE aren't things you should do on a regular basis, because SQL Server needs some "swap" space to move things around for optimal performance. So neither should be relied upon as your long term fix, and both could cause some noticeable performance degradation of TFS response times.
JB
Are you seeing data growth every day, even when no activity occurs on the system? If the answer is yes, are you storing any binaries outside of the 8GB of source somewhere?
The reason that I ask is that if TFS is unable to calculate a delta, or if the file exceeds the size limit for delta generation, TFS will duplicate the entire binary file. I don't have the link with me, but I have it on my work machine; it describes this scenario and how to fix it, in the event that this is the cause of your problems.