I have files that are updated every 2 hours. I have to detect the files automatically and insert the extracted information from them into a database.
Our DBMS is Postgresql and programming language is Python. How would you suggest I do that?
I want to make use of a DAL (Database Abstraction Layer) to make the connection between the files and the database, and use PostgreSQL's LISTEN/NOTIFY technique to detect the new files. If you agree with me, please tell me how I can use the LISTEN/NOTIFY functions to detect the files.
Thank you
What you need is to write a script that stays running as a daemon, using a file system notification API to run a callback function when the files change. When the script is notified that the files have changed, it should connect to PostgreSQL and do the required work, then go back to sleep waiting for the next change.
The only truly cross platform way to watch a directory for changes is to use a delay loop to poll os.listdir and os.stat to check for new files and updated modification times. This is a waste of power and disk I/O; it also gets slow for big sets of files. If your OS reliably changes the directory modification time whenever files within the directory change you can just os.stat the directory in a delay-loop, which helps.
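For illustration, a minimal polling loop in Python might look like this (handle_change is a placeholder for whatever processing you need):

    import os
    import time

    def poll_directory(path, interval=10):
        """Poll a directory, firing handle_change for new or modified files."""
        seen = {}  # filename -> last observed mtime
        while True:
            for name in os.listdir(path):
                full = os.path.join(path, name)
                mtime = os.stat(full).st_mtime
                if seen.get(name) != mtime:
                    seen[name] = mtime
                    handle_change(full)  # placeholder: your processing callback
            time.sleep(interval)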
It's much better to use an operating system specific notification API. Were you using Java I'd tell you to use the NIO2 watch service, which handles all the platform specifics for you. It looks like Watchdog may offer something similar for Python, but I haven't needed to do directory change notification in my Python coding so I haven't tested it. If it doesn't work out you can use platform-specific techniques like inotify/dnotify for Linux, and the various watcher APIs for Windows.
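If Watchdog does work out, a minimal sketch might look like the following (untested by me, as noted; the watched path and the handler body are placeholders):

    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    class IngestHandler(FileSystemEventHandler):
        def on_modified(self, event):
            if not event.is_directory:
                print("changed:", event.src_path)  # do the PostgreSQL work here

    observer = Observer()
    observer.schedule(IngestHandler(), "/path/to/incoming", recursive=False)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()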
See also:
How do I watch a file for changes?
Python daemon to watch a folder and update a database
You can't use LISTEN/NOTIFY for this, because notifications can only be sent from within the database, and your files obviously aren't in there.
You'll want to have your Python script scan the directory the files are in and check their modification time (mtime). If they have been updated, you'll need to read in the files, parse the data, and insert it into the db. Without knowing the format of the files, there's no way to be more specific.
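As a rough Python sketch, assuming psycopg2 and a made-up readings table with one value per line (adjust the parsing and the SQL to your real format and schema):

    import os
    import psycopg2  # third-party PostgreSQL driver

    def ingest_if_updated(path, last_mtime, conn):
        """Re-read and insert a file's contents if it changed since last_mtime."""
        mtime = os.stat(path).st_mtime
        if mtime <= last_mtime:
            return last_mtime  # nothing new
        with open(path) as f, conn.cursor() as cur:
            for line in f:
                # placeholder parsing: one value per line
                cur.execute("INSERT INTO readings (value) VALUES (%s)",
                            (line.strip(),))
        conn.commit()
        return mtime

    conn = psycopg2.connect("dbname=mydb user=me")  # connection details assumed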
Related
I am using the book "Mastering Akka" by Christian Baxter.
Now I am trying to build a new project with Akka as an event sourced system.
I have an object like Folder. This Folder could contain a number of Files. Real files (java.io.File). For a local system, no problem. But I am trying to build a distributed system. User A sets up a database and gets access to his database. Where are the files? User A is not sitting at his desktop PC (where he saved the Folder); he is sitting at a notebook in his home office. Now he needs these files inside the Folder.
First I thought of saving the files as Array[Byte]. But what about the situation where a file is 150 MB? And maybe there are 20 files inside the folder, all more than 150 MB? I don't think my RAM would cope with that for long without a crash.
There is no HTTP server. Maybe I could ask the server to deliver the files as a stream? But that needs setup. Is that the best way?
What is the best practice for handling multiple distributed and/or large files in an event sourced actor?
I am building an internal iOS application (so - it won't ever be in the app store), and I need to keep a directory of content synchronized between a server and each of the instances of the iOS application. This would be easy enough if I just wanted to delete and re-download this content each time, but I would rather use something similar to rsync to only download the elements that have changed.
I haven't found any good way to utilize rsync. I considered looking at Objective-Git as a possibility here, but at a quick glance it looked like a lot of the support for remote repositories isn't implemented yet.
As a final note, while this won't be in the app store, I will not be jailbreaking these devices and I would prefer not to rely on any private APIs (although if there were an elegant solution that utilized private APIs I might consider it).
Thoughts?
ADDITIONAL NOTE: This needs to be an isolated solution. I won't be relying on outside services (like Dropbox, Box.net, etc...). This needs to work solely between the device and the server (which is on a local network with the device).
Use HTTP to list the contents of each folder on the server.
Compare last modification time of each file with those on the device, and identify added/removed files.
Get added and modified files, remove deleted files.
It sounds like you're maybe asking for a library that already does this, but if you don't find one it's obviously moderately easy to write from the ground up using stat(2) on the server and the same or a higher-level equivalent on the iOS devices. Have the iPhone send a tree of files with their modification dates to the server and get back a list of insert/delete/update operations, with the URL (or whatever) for each one, so you can do them incrementally on a background thread. Have the information from the server for new/updated files include the mod date the server has, so you can set it to be the same on the iOS device and send it back when asking the server for the status of each file (a bit of a hack, using the file system to store that, but it works).
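A rough Python sketch of the tree/diff step, to make the idea concrete (the operation names are arbitrary):

    import os

    def build_manifest(root):
        """Map relative path -> mtime for every file under root."""
        manifest = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                manifest[os.path.relpath(full, root)] = os.stat(full).st_mtime
        return manifest

    def diff_manifests(client, server):
        """Operations the client must perform to match the server."""
        ops = []
        for path, mtime in server.items():
            if path not in client:
                ops.append(("add", path))
            elif mtime > client[path]:
                ops.append(("update", path))
        for path in client:
            if path not in server:
                ops.append(("delete", path))
        return ops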
Why not just set up a RESTful interface and do it over HTTP? That way you could query the modification times easily enough to determine whether client or server files need to be updated. You might also want to keep track of which files on the client have been synced, so you can easily know which files to add or delete. This can be done with a simple .sync file or with a plist / sqlite / etc.
If you'll consider FTP, there are some pretty advanced client libraries available.
For example, the iOS Chilkat bundle includes an FTP client library that supports synchronization in both directions. It's not free, but it's pretty cheap -- and you get a ton of other stuff that will likely prove useful someday. Here's an example of iOS pulling down all additions and changes (mode 2):
http://www.example-code.com/ios/ftp_syncLocalTree.asp
One caveat -- judging solely from the example, it doesn't appear to synchronize deletions. If this is a requirement, you could do it yourself without too much effort immediately following a sync.
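For example, a deletion pass might be as simple as this Python sketch, assuming you can get the list of remote names from a directory listing (e.g. ftplib's nlst):

    import os

    def delete_removed(local_dir, remote_names):
        """Remove local files the server no longer has, after a download sync."""
        remote = set(remote_names)
        for name in os.listdir(local_dir):
            if name not in remote:
                os.remove(os.path.join(local_dir, name))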
acrosync (see https://acrosync.com/library.html) seems like a good fit given the initial question; however, I haven't used it myself yet.
I'm testing out Microsoft Sync Framework to try and see if it'll be suitable for a task that I'm working on. One of the things I'd like to be able to do is to have the option to not just send changed files, but instead to send all of the files (for example, if I'm syncing to a client machine for the first time, and so want to send all files).
I can't seem to find an example of this in the documentation, so any advice would be welcome.
If you're syncing for the first time, there is nothing special to configure, as it will sync everything.
If you've already synced and want to re-send all files regardless of whether they've changed or not, just delete the metadata file; that should remove all knowledge of what has been synced.
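For example (a Python sketch for brevity; the metadata file name is configurable when you construct the FileSyncProvider, so "filesync.metadata" below is only an assumption -- check your provider configuration):

    import os

    def force_full_resync(replica_dir, metadata_name="filesync.metadata"):
        """Deleting the sync metadata makes the next sync treat everything as new."""
        metadata = os.path.join(replica_dir, metadata_name)
        if os.path.exists(metadata):
            os.remove(metadata)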
I'm traveling all the time (every 2-3 months, I'm in a new city or country), with no real permanent address. I've managed to work out all the kinks over the last couple of years...except having a good backup/sync solution.
I have a MacBook Pro & a ThinkPad W701 (which runs two different VMs). It's a pain in the ass because making changes on one machine (such as adding some new music or updating some presentations) requires me to keep track of what changed where. And then every couple of weeks, after syncing the three different images, I try to manually sync it out to a backup drive that I carry around.
It's pretty much the most annoying thing ever... especially when I sometimes make changes on the backup drive and have to remember not to overwrite them.
What I'd really like is something simple that has more of a version control like workflow:
- I can push out changes to some central server (like a commit. Example: I add some changes to my music directory and then I can just commit those changes to backup)
- Before the backup happens, I'd like to see a "diff": what files will be overwritten, which one's newer, etc.
- I can access my files off the server (if I'm making an audio mix and need to pull out some songs, I'd like to get them from the server. All the backups can't just be one big binary compressed zip blob)
Dropbox comes pretty close but it lacks the "commit" & "diff" functionality. I thought about using Amazon AWS but that falls short because I can't see diffs and can't access my files directly off AWS.
Any ideas? Or any other solutions? I guess what I'd really like is Time Machine in the cloud, or maybe even a NAS that's securely accessible through the internet.
You might want to use rsync. It's a Unix synchronization tool that you can use on Windows and Unix variants (including Mac OS X). It uses delta copying to minimize transfer and hardlinks to minimize backup size.
You can access all files in every backup as though they were normal files. Diffing can be done using traditional tools. It is all command-line based, so if you don't want that you will need to find GUI tools, but I don't know which ones to recommend.
You would need a server with an rsync daemon/service. I don't know if there are providers for it, but you can set up your own VPS starting at a few dollars a month.
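As an illustration, a snapshot-style backup wrapper around rsync (a Python sketch with assumed paths; --link-dest is what does the hardlinking of unchanged files):

    import datetime
    import subprocess

    def snapshot_backup(src, backup_root):
        """One rsync snapshot; unchanged files are hardlinked to the previous one."""
        stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M%S")
        dest = f"{backup_root}/{stamp}"
        subprocess.run(
            ["rsync", "-a", "--delete",
             f"--link-dest={backup_root}/latest",
             f"{src}/", dest],
            check=True,
        )
        # repoint the 'latest' symlink at the new snapshot
        subprocess.run(["ln", "-sfn", dest, f"{backup_root}/latest"], check=True)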
Have you looked at Amazon S3? S3 is a data storage mechanism and there are a bunch of tools to "sync" your local directory with S3. Some of the tools are listed here:
http://www.vinodlive.com/2007/08/20/amazon-s3-storage-tools/
Out of these, S3Sync should do what you are looking for, i.e. submit only changed files, and it has a mode that shows you the changes before submitting them.
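To illustrate the "see changes before submitting" idea, here is a sketch using boto3 rather than S3Sync itself (bucket name and AWS credentials are assumed to be configured):

    import os
    import boto3  # third-party AWS SDK for Python

    def changed_files(local_root, bucket):
        """List local files that are missing from or newer than the bucket."""
        s3 = boto3.client("s3")
        remote = {}
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                remote[obj["Key"]] = obj["LastModified"].timestamp()
        changes = []
        for dirpath, _dirs, files in os.walk(local_root):
            for name in files:
                full = os.path.join(dirpath, name)
                key = os.path.relpath(full, local_root)
                if remote.get(key, 0) < os.stat(full).st_mtime:
                    changes.append(key)
        return changes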
I'm writing an application that monitors a directory for new input files by polling the directory every few seconds. New files may often be several megabytes, and so take some time to fully arrive in the input directory (eg: on copy from a remote share).
Is there a simple way to detect whether a file is currently in the process of being copied? Ideally any method would be platform and filesystem agnostic, but failing that specific strategies might be required for different platforms.
I've already considered taking two directory listings separated by a few seconds and comparing file sizes, but this introduces a time/reliability trade-off that my superiors aren't happy with unless there is no alternative.
For background, the application is being written as a set of Matlab M-files, so no JRE/CLR tricks I'm afraid...
Edit: files are arriving in the input directory by a straight move/copy operation, either from a network drive or from another location on a local filesystem. This copy operation will probably be initiated by a human user rather than another application.
As a result, it's pretty difficult to place any responsibility on the file provider to add control files or use an intermediate staging area...
Conclusion: it seems like there's no easy way to do this, so I've settled for a belt-and-braces approach - a file is ready for processing if:
its size doesn't change in a certain period of time, and
it's possible to open the file in read-only mode (some copying processes place a lock on the file).
Thanks to everyone for their responses!
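For anyone landing here later, the two checks look roughly like this in Python (my actual implementation is in Matlab; the settle time is arbitrary):

    import os
    import time

    def is_ready(path, settle_seconds=5):
        """True if the file's size is stable and it can be opened for reading."""
        size = os.stat(path).st_size
        time.sleep(settle_seconds)
        if os.stat(path).st_size != size:
            return False  # still growing
        try:
            with open(path, "rb"):
                return True  # no lock held by the copying process
        except OSError:
            return False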
The safest method is to have the application(s) that put files in the directory first put them in a different, temporary directory, and then move them to the real one (which should be an atomic operation even when using FTP or file shares). You could also use naming conventions to achieve the same result within one directory.
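A sketch of the pattern in Python (paths assumed; note the same-filesystem caveat in the comment):

    import os

    def deliver(tmp_path, incoming_dir):
        """Stage the file elsewhere first, then move it into place in one step."""
        final = os.path.join(incoming_dir, os.path.basename(tmp_path))
        # os.rename is atomic on POSIX only when both paths are on the same
        # filesystem; across filesystems you'd fall back to copy-then-rename.
        os.rename(tmp_path, final)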
Edit:
It really depends on the filesystem, on whether its copy functionality even has the concept of a "completed file". I don't know the SMB protocol well, but if it has that concept, you could write an app that exposes an SMB interface (or patch Samba) and an API to get notified for completed file copies. Probably a lot of work though.
This is a middleware problem as old as the hills, and the short answer is: no.
The two 'solutions' put the onus on the file-uploader: (1) upload the file to a staging directory and then move it into the destination directory, or (2) upload the file, then create/upload a 'ready' file that indicates the state of the content file.
The first is the better of the two, but both are inelegant. The truth is that better communication media exist than the filesystem. Consider using some IPC that involves only a push or a pull (and not both, as the filesystem does), such as an HTTP POST, a JMS or MSMQ queue, etc. Furthermore, this can also be synchronous, allowing the process receiving the file to acknowledge the content, even check it for worthiness, and hand the client a receipt; this is the righteous road to non-repudiation. Follow this, and you will never suffer arguments over whether a file was or was not delivered to your server for processing.
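A sketch of the HTTP POST variant in Python, with a receipt check (the endpoint and the receipt format are hypothetical; requests is a third-party library):

    import hashlib
    import requests  # third-party HTTP client

    def upload_with_receipt(path, url):
        """POST a file and verify the server's acknowledgement of its content."""
        with open(path, "rb") as f:
            data = f.read()
        digest = hashlib.sha256(data).hexdigest()
        resp = requests.post(url, files={"file": (path, data)})
        resp.raise_for_status()
        receipt = resp.json()  # assumed: server echoes a checksum as a receipt
        if receipt.get("sha256") != digest:
            raise RuntimeError("server acknowledged different content")
        return receipt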
M.
One simple possibility would be to poll at a fairly large interval (2 to 5 minutes) and only acknowledge the new file the second time you see it.
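In Python this is only a few lines (process is a placeholder for your handler):

    import os
    import time

    def poll_twice(path, interval=120):
        """Only process a file the second time it shows up in a listing."""
        pending, done = set(), set()
        while True:
            current = set(os.listdir(path))
            for name in (current & pending) - done:
                process(os.path.join(path, name))  # placeholder handler
                done.add(name)
            pending = current - done
            done &= current  # forget files that have been removed
            time.sleep(interval)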
I don't know of a way in any OS to determine whether a file is still being copied, other than maybe checking if the file is locked.
How are the files getting there? Can you set an attribute on them as they are written and then change the attribute when write is complete? This would need to be done by the thing doing the writing ... which sounds like it isn't an option.
Otherwise, caching the listing and treating a file as new if it has the same file size for two consecutive listings is the best way I can think of.
Alternatively, you could use the modified time on the file - the file has to be new and have a modified time that is at least x in the past. But I think this will be about equivalent to caching the listing.
If you are polling the folder every few seconds, it's not much of a time penalty, is it? And it's platform agnostic.
Also, Linux only: http://www.linux.com/feature/144666
Like cron but for files. Not sure how it deals with your specific problem, but it may be of use?
What is your OS? On Unix you can use the "lsof" utility to determine whether a user has the file open for write. Apparently Microsoft's Process Explorer on Windows offers the same functionality.
Alternatively, you could just try an exclusive open on the file and bail out if this fails. But this can be a little unreliable and it's easy to tread on your own toes.
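On Unix the exclusive-open attempt could look like this Python sketch (note that flock locks are advisory, so a plain cp won't hold one -- which is exactly the unreliability mentioned above):

    import fcntl  # Unix only

    def try_exclusive_open(path):
        """Return an open, locked file handle, or None if someone holds a lock."""
        f = open(path, "rb")
        try:
            fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return f  # caller unlocks and closes when done
        except OSError:
            f.close()
            return None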