If I close a file and then reopen it, I cannot write more data to it after reopening, but if I keep it open I can write as many lines as I want and then close it when I am finished writing.
See the example below. Thanks.
if (f_mount(&FatFs, "", 1) == FR_OK) {
    f_mkdir("TEST");
    count = 0;
    while (count < 200) {
        if (f_open(&fil, "TEST/test.txt", FA_OPEN_ALWAYS | FA_WRITE) != FR_OK) {
            break;
        }
        else {
            sprintf(array, "This is file entry number: %d\r\n", count);
            f_puts(array, &fil);
            if (f_close(&fil) != FR_OK) {
                break;
            }
        }
        count++;
    }
    f_mount(0, "", 1);
}
It counts all the way to the maximum value, but only the last entry (199) ends up in the file.
You need to set your open mode so that it appends to the file rather than writing at the start:
From the f_open documentation:
FA_OPEN_APPEND - Same as FA_OPEN_ALWAYS except the read/write pointer is set to the end of the file.
When you open the file with this:
f_open(&fil, "TEST/test.txt", FA_OPEN_ALWAYS | FA_WRITE);
you are opening the file for writing with the write pointer at the start of the file, so when you go to write to the file with:
f_puts(array, &fil);
you overwrite the previous data in the file.
If you change your open to:
f_open(&fil, "TEST/test.txt", FA_OPEN_APPEND | FA_WRITE);
then you should get the behavior you desire. One caveat: each run of the program will keep appending to the same file. If that isn't desired, you may need to delete the file first, or truncate it once with FA_CREATE_ALWAYS and then re-open it on each pass with FA_OPEN_APPEND.
Depending on what you are trying to do, you should also take a look at f_sync, which performs all the cleanup and writes that an f_close would perform but keeps the file open. From the documentation:
This is suitable for applications that open files for a long time in write mode, such as a data logger. Calling f_sync periodically, or immediately after f_write, can minimize the risk of data loss due to a sudden blackout or unintentional media removal.
This would cover nearly every case I can think of for why you might be repeatedly opening and closing a file to append data, so this may be a better solution to your problem.
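As an illustration, here is a minimal data-logger-style sketch reusing the FatFs, fil, array and count objects from the question: the file is opened once, each line is appended, and f_sync is called after every write instead of closing and re-opening the file.

if (f_mount(&FatFs, "", 1) == FR_OK) {
    if (f_open(&fil, "TEST/test.txt", FA_OPEN_APPEND | FA_WRITE) == FR_OK) {
        for (count = 0; count < 200; count++) {
            sprintf(array, "This is file entry number: %d\r\n", count);
            f_puts(array, &fil);
            f_sync(&fil);        /* flush cached data to the medium right away */
        }
        f_close(&fil);           /* final close when logging is finished */
    }
    f_mount(0, "", 1);
}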
I'm using this on a push-button click in Qt:
QString fileName = QFileDialog::getSaveFileName(this, tr("Save file"),
                                                QString(),
                                                tr("(*.csv)"));
It takes 3-4 seconds to open the save dialog. Why? How can I improve the speed?
Try to connect this slot to a QPushButton:
void MainWindow::openFD() {
    QTime myopentime = QTime::currentTime();
    for (int i = 0; i < 1000000; i++) ;   // comment out this line for the real test
    QString fileName = QFileDialog::getSaveFileName(this, QString(),
            myopentime.toString("m.s.zzz") + " " + QTime::currentTime().toString("m.s.zzz"),
            tr("*.csv"));
}
Then try to run it with and without the counting part.
You will see that the counting part adds a few milliseconds; without it there is not even a millisecond of difference between the two times shown in the filename field.
So you have to look somewhere else in your code, or give us a better example.
PS:
1) You will need:
#include <QTime>
2) The counting line is there to show you that the method works (you will see some difference when it is left in versus commented out).
Result and answer: if this test shows no difference, the problem is not in QFileDialog but somewhere else in your code, or possibly in the window manager of your OS.
I am using Mirth version 3.0.1. I am reading a file (using a File Reader) that has 34,000 records. Every record has 45 pipe-separated (|) columns. Mirth is taking too much time to read the file from disk. Mirth is installed on the same server where the file is located. Earlier, I was facing a Java heap space issue, which I resolved by setting -Xms1024m -Xmx4096m in the files mcserver.vmoptions and mcservice.vmoptions. Now I have to solve the reading performance issue. Please find the channel attached.
The answer to this problem is highly dependent on the solution itself. For example, if you are doing transformations when you benchmark, the problem may not be with reading the files but with doing massive amounts of filtering and transformation in Mirth. Since Mirth converts everything you configure into basically one gigantic JavaScript that executes on the server, it may well be that this is causing the performance problem. Pre-processor scripts might also create a problem if you do something that causes Mirth to read the whole file.
It might also be that your 34,000 lines contain huge quantities of information, simply making the file very big and expensive to process. If every record in the file is supposed to create new messages within Mirth, you might also want to check the batch settings for the reader.
In addition, the performance of read operations from disk is of course affected a lot by the infrastructure and hardware of the platform itself. You did mention that you are reading the files locally and that you had to increase the memory for Mirth; all of this could be a problem in itself. To get a benchmark, you would want to compare this to something else. Maybe write a small Java program that just reads the file, to compare performance outside of Mirth.
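As a rough baseline you could use something along these lines (sketched here in C rather than Java; the file path is a placeholder), which just streams the file from disk and times it, so you can compare raw read speed with what Mirth achieves:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec t0, t1;
    char buf[1 << 16];
    size_t total = 0, n;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    FILE *fp = fopen("/data/in/records.txt", "rb");   /* hypothetical path */
    if (!fp) { perror("fopen"); return 1; }
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        total += n;                                   /* just count the bytes read */
    fclose(fp);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("read %zu bytes in %.3f s\n", total, secs);
    return 0;
}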
Thanks for the suggestions.
I have used router.routeMessage('channelName', 'PartOfMsg') to route batches of 5,000 records (from one channel to a second channel) out of the file of 34,000 records. This helped to read from the file faster while processing the records at the same time.
For the Mirth community, below is the code to route messages from one channel to another; this solution also covers the case where you have a bulk of records to process in batches.
In Source Transformer,
debug = "ON";
XML.ignoreWhitespace = true;
logger.debug('Inside source transformer "SplitFileIntoFiles" of channel: SplitFile');
var subSegmentCounter = 0,
    xmlMessageProcessCounter = 0,
    singleFileLimit = 5000,
    isError = false,
    xmlMessageProcess = new XML(<delimited><row><column1></column1><column2></column2></row></delimited>),
    newSubSegment = <row><column1></column1><column2></column2></row>,
    totalPatientRecords = msg.children().length();
logger.debug('Total number of records found in the patient input file are: ');
logger.debug(totalPatientRecords);
try {
    for each (seg in msg.children())
    {
        xmlMessageProcess.appendChild(newSubSegment);
        xmlMessageProcess['row'][xmlMessageProcessCounter] = msg['row'][subSegmentCounter];
        if (xmlMessageProcessCounter == singleFileLimit - 1)
        {
            logger.debug('Now sending the 5000 records to the next channel from channel DOR Batch File Process IHI');
            router.routeMessage('DOR SendPatientsToMedicare', xmlMessageProcess);
            logger.debug('After sending the 5000 records to the next channel from channel DOR Batch File Process IHI');
            xmlMessageProcessCounter = 0;
            delete xmlMessageProcess['row'];
        }
        subSegmentCounter++;
        xmlMessageProcessCounter++;
    } // End of FOR loop
} // End of try block
catch (exception)
{
    logger.error('The exception has been raised in source transformer "SplitFileIntoFiles" of channel: SplitFile');
    logger.error(exception);
    globalChannelMap.put('isFailed', true);
    globalChannelMap.put('errDesc', exception);
    return true;
}

if (xmlMessageProcessCounter > 1)
{
    try
    {
        logger.debug('Now sending the remaining records to the next channel from channel DOR Batch File Process IHI');
        router.routeMessage('DOR SendPatientsToMedicare', xmlMessageProcess);
        logger.debug('After sending the remaining records to the next channel from channel DOR Batch File Process IHI');
        delete xmlMessageProcess['row'];
    }
    catch (exception)
    {
        logger.error('The exception has been raised in source transformer "SplitFileIntoFiles" of channel: SplitFile');
        logger.error(exception);
        globalChannelMap.put('isFailed', true);
        globalChannelMap.put('errDesc', exception);
        return true;
    }
}

return true;
// End of JavaScript
Hope this helps.
I have an API to log in to a system. It doesn't support concurrent logins with the same user id (I guess due to licensing). However, this code can be called by different processes/clients launched by different users from another system, in my case a ClearCase trigger.
my $conn = new BuildForge::Services::Connection('ccbuildforged01', 3966);
my $token = $conn->authUser('bldforge', 'password');
I have two choices:
The $token returned can be shared by different clients. So how can I persist this $token?
I have 10 licenses, so I can create 10 users. How can I create a file-based persistent stack for all clients to share these user ids?
I googled a bit and found this:
A single, simple file and a lock seems all you need. You push by lock,append,unlock. You pop by lock,seek,read,truncate,unlock.
Can someone give me a code sample?
I would maintain ten files (say 1.conf though 10.conf) with the user information.
To get an available user id, look for a .conf file with no corresponding .pid file (e.g. 1.pid). If you find one, try to get an exclusive lock on the file, and then create a corresponding .pid file with an exclusive lock on it. (If any of these fail, look for another file.)
When you are done, release the lock on the .conf file, then release the lock and delete the .pid file.
If you want to avoid a possible race condition, you could have a queue.lock file that you try to lock exclusively before looking for an available user id. If it's already locked, wait for the lock to be released.
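To sketch the "try each id and skip the locked ones" part (shown here in C for illustration; Perl's flock takes the same LOCK_EX and LOCK_NB flags from Fcntl, and 1.conf through 10.conf are the hypothetical names from above):

#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

/* Returns a descriptor that holds the lock on a free *.conf file, or -1 if
   every id is currently in use. The lock is released by closing the fd. */
int acquire_user_slot(void)
{
    char path[32];
    int i, fd;

    for (i = 1; i <= 10; i++) {
        snprintf(path, sizeof path, "%d.conf", i);
        fd = open(path, O_RDONLY);
        if (fd < 0)
            continue;                          /* no such file, try the next id */
        if (flock(fd, LOCK_EX | LOCK_NB) == 0)
            return fd;                         /* this id is ours until close(fd) */
        close(fd);                             /* already locked by someone else */
    }
    return -1;
}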
Why do we need the extra .pid file? Isn't a lock on the .conf file enough?
Using the following code, if I start two instances of this program at the same time, the second one waits for the first to unlock and then locks the first file, id01.txt. It is waiting to read. How can I make it go on to the next one if a file is locked?
use FileHandle;
use Fcntl qw(:flock);

for ($count = 1; $count <= 8; $count++) {
    if (open SELF, "< id0$count.txt") {
        if (flock(SELF, LOCK_EX)) {
            # Exclusive lock
            print "Locked id0$count.txt...\n";
            sleep(10);
            close SELF;
        } else {
            next;
        }
    } else {
        next;
    }
    print "Unlocked id0$count.txt...\n";
}
I want to implement the already defined system calls in PintOS (halt(), create(), etc., defined in pintos/src/lib/user/syscall.c). The current system call handler in pintos/src/userprog/syscall.c does not do anything. How do I make a process that makes system calls? Further, I need to add a few system calls of my own. How do I proceed with that as well? But first I need to implement the existing system calls.
The default implementation in PintOS terminates the calling process.
Go to this link. There is an explanation of where to modify the code to implement the system calls.
The "src/examples" directory contains a few sample user programs.
The "Makefile" in this directory compiles the provided examples, and you can edit it to compile your own programs as well.
Such a program or process, when run, will in turn make system calls.
Use gdb to follow the execution of one such program; a simple printf statement will eventually invoke the write system call on STDOUT.
The link given also has pointers on how to run PintOS under gdb; my guess is you are using either Bochs or QEMU. In any case, just run gdb once with a simple hello-world program running on PintOS.
This will give you an idea of how the system call is made.
static void
syscall_handler (struct intr_frame *f)
{
  int *p = f->esp;                                    /* syscall number is on top of the user stack */
  switch (*p)
    {
      case SYS_CREATE:                                /* number #defined in lib/syscall-nr.h */
        {
          const char *name = (const char *) *(p + 1); /* extract the file name */
          if (name == NULL || *name == '\0')
            exit (-1);
          off_t size = (off_t) *(p + 2);              /* extract the file size */
          f->eax = filesys_create (name, size);       /* eax carries the return value back */
          break;
        }
    }
}
This is pseudocode for SYS_CREATE; all the file-system-related system calls are fairly trivial.
File-system-related system calls like open, read, write and close need you to translate each file to its corresponding fd (file descriptor). You need to add a file table to each process to keep track of this; it can be per-process data or global data (your choice). One possible shape for such a table is sketched below, followed by the write handler.
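A minimal sketch, assuming a fixed-size table stored in struct thread (the names FD_TABLE_SIZE and allocate_fd are made up here; fdtable simply matches the field used in the write handler that follows):

#define FD_TABLE_SIZE 128

/* Field to add inside struct thread in threads/thread.h:
     struct file *fdtable[FD_TABLE_SIZE];    fdtable[fd] == NULL means fd is free. */

/* Hand out the lowest free descriptor; 0 and 1 are reserved for STDIN/STDOUT. */
static int
allocate_fd (struct file *file)
{
  struct thread *t = thread_current ();
  int fd;
  for (fd = 2; fd < FD_TABLE_SIZE; fd++)
    if (t->fdtable[fd] == NULL)
      {
        t->fdtable[fd] = file;
        return fd;
      }
  return -1;                                 /* table is full */
}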
    case SYS_WRITE:
      {
        int fd = *(p + 1);                          /* file descriptor           */
        char *buffer = (char *) *(p + 2);           /* user buffer to write from */
        unsigned size = *(p + 3);                   /* number of bytes to write  */

        /* Validate the arguments before touching the fd table. */
        if ((void *) buffer >= PHYS_BASE)
          exit (-1);                                /* buffer is not in user space */
        if (fd < 0 || fd >= 128)
          exit (-1);                                /* fd outside table bounds     */
        if (fd == 0)
          exit (-1);                                /* writing to STDIN            */

        if (fd == 1)                                /* writing to STDOUT           */
          {
            /* Push the buffer to the console in chunks of 100 bytes. */
            int a = (int) size;
            while (a >= 100)
              {
                putbuf (buffer, 100);
                buffer += 100;
                a -= 100;
              }
            putbuf (buffer, a);
            f->eax = (int) size;
          }
        else
          {
            /* Look the file up in my per-thread fd table. */
            struct file *fil = thread_current ()->fdtable[fd];
            if (fil == NULL)
              f->eax = -1;
            else if (is_directory (fil->inode))     /* don't allow writes to directories */
              exit (-1);
            else
              f->eax = file_write (fil, buffer, (off_t) size);
          }
        break;
      }                                             /* Write to a file. */
open - adds a new entry to the fd table and returns the fd number you assigned to the file,
close - removes that entry from the fd table,
read - similar to write. A sketch of an open handler in the same style follows.
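A hedged sketch of SYS_OPEN (allocate_fd is the hypothetical helper from the fd-table sketch above; filesys_open, file_close and is_user_vaddr are standard PintOS functions):

    case SYS_OPEN:
      {
        const char *name = (const char *) *(p + 1);
        if (name == NULL || !is_user_vaddr (name))
          exit (-1);                          /* bad pointer from user space */
        struct file *file = filesys_open (name);
        if (file == NULL)
          f->eax = -1;                        /* open failed */
        else
          {
            int fd = allocate_fd (file);
            if (fd < 0)
              {
                file_close (file);            /* fd table is full */
                f->eax = -1;
              }
            else
              f->eax = fd;
          }
        break;
      }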
The exec (process creation) and wait system calls are not as simple to implement...
Cheers :)
How would I validate that a JPG file is a valid image file? We have files written to a directory via FTP, but we seem to be picking up each file before it has finished being written, creating invalid images. I need to be able to identify when a file is no longer being written to. Any ideas?
Easiest way might just be to write the file to a temporary directory and then move it to the real directory after the write is finished.
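If you control the upload side, a sketch of that idea in C (paths are placeholders): write under a temporary name and rename the file into the pickup directory only once the upload is complete. rename() is atomic within a single filesystem, so the consumer never sees a half-written file.

#include <stdio.h>

/* Called after the upload to /ftp/tmp/photo.jpg.part has completed. */
int publish_upload(void)
{
    if (rename("/ftp/tmp/photo.jpg.part", "/ftp/incoming/photo.jpg") != 0) {
        perror("rename");
        return -1;
    }
    return 0;
}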
Or you could check here.
JPEG::Error
[arguments: none] If the file reference remains undefined after a call to new, the file is to be considered not parseable by this module, and one should issue some error message and go to another file. An error message explaining the reason of the failure can be retrieved with the Error method:
EDIT:
Image::TestJPG might be even better.
You're solving the wrong problem, I think.
What you should be doing is figuring out how to tell when whatever FTPd you're using is done writing the file - that way when you come to have the same problem for (say) GIFs, DOCs or MPEGs, you don't have to fix it again.
Precisely how you do that depends rather crucially on what FTPd on what OS you're running. Some do, I believe, have hooks you can set to trigger when an upload's done.
If you can run your own FTPd, Net::FTPServer or POE::Component::Server::FTP are customizable to do the right thing.
In the absence of that:
1) try tailing the logs with a Perl script that looks for 'upload complete' messages
2) use something like lsof or fuser to check whether anything is locking a file before you try and copy it.
Again looking at the FTP issue rather than the JPG issue.
I check the timestamp on the file to make sure it hasn't been modified in the last X (here 5) minutes; that way I can be reasonably sure the upload has finished:
# time in seconds that the file was last modified
my $last_modified = (stat("$path/$file"))[9];

# current time in seconds since the epoch (i.e. 1970)
my $epoch_time = time();

# only proceed if the file has NOT been modified in the last 5 minutes,
# i.e. it is no longer being uploaded
unless ( $last_modified >= ($epoch_time - 300) ) {
    # move / edit or whatever
}
I had something similar come up once; more or less what I did was:
var oldImageSize = 0;
var currentImageSize;
while ((currentImageSize = checkImageSize(imageFile)) != oldImageSize) {
    oldImageSize = currentImageSize;
    sleep 10;
}
processImage(imageFile);
Have the FTP process set the readonly flag, then only work with files that have the readonly flag set.