Tableau Extract not found due to temporary storage location - tableau-api

I am relatively new to Tableau and may have encountered some data loss, but let us see whether there is some way to salvage it.
Upon reloading a workbook after a couple of weeks of non-use, we can see a reference to an extract load attempt from a temporary (OS X) location.
I had not realized that the extract was not being saved with the .twb itself, let alone that it was in a transient disk location.
So: is the data gone? Secondarily, did I miss some step that would have nudged me to save the extract to a non-volatile, non-temporary location on disk?

Your file path shows that you are opening the workbook from a temporary location, and that you reopened it after a couple of weeks. Your OS has most likely deleted the files in the meantime, since temporary directories are cleaned periodically.
Save your extract files to a permanent location rather than opening them from a temporary one. Saving the workbook as a packaged workbook (.twbx) also bundles the extract with the workbook itself.

Related

Does seafile store synced files anywhere?

I'm using Seafile (in Docker) to sync some files to a Synology NAS and it is all working correctly. I've created an external folder that is pointed at the /shared folder in the container.
I think I already know the answer, but are the files synced to the server stored 'normally' somewhere? i.e. If I sync a folder called 'photos' and it has 'a.jpg' in it, will I be able to find that file on the seafile server?
The reason for the question is I would like to backup the original files that are sync'd, rather than having to backup the seafile DB, etc.
(I am aware that syncthing does what I want, so I may choose to use that instead, just want to confirm my understanding)
Thanks
TL;DR
No, you won't find your a.jpg file on the server; your files are turned into blocks of bytes.
To understand why
Take a look at this part of the data-model documentation:
FS
There are two types of FS objects, SeafDir Object and Seafile Object. SeafDir Object represents a directory, and Seafile Object represents a file.
Block
A file is further divided into blocks with variable lengths. We use Content Defined Chunking algorithm to divide file into blocks. A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 1MB.
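To make "blocks with variable lengths" concrete, here is a toy sketch of content-defined chunking in Python. It is not Seafile's actual implementation (Seafile uses a proper rolling hash along the lines of the LBFS paper above); the hash, window, and mask below are made up for illustration:

```python
import hashlib

WINDOW = 48    # bytes of context that decide a boundary
MASK = 0x1FFF  # boundary when hash & MASK == MASK; expected block ~8 kB

def cdc_chunks(data: bytes, min_size: int = 2048, max_size: int = 1 << 20):
    """Toy content-defined chunking: declare a boundary wherever a hash
    of the last WINDOW bytes matches MASK. Because the decision looks
    only at a small sliding window, an edit early in the file shifts
    nearby boundaries but later ones realign, so unchanged blocks keep
    their IDs. Real implementations use a rolling hash instead of
    rehashing the window at every byte."""
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        window = data[max(0, i - WINDOW + 1):i + 1]
        h = int.from_bytes(hashlib.sha1(window).digest()[:4], "big")
        if (size >= min_size and (h & MASK) == MASK) or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

blocks = cdc_chunks(open("a.jpg", "rb").read())           # hypothetical file
print([hashlib.sha1(b).hexdigest()[:8] for b in blocks])  # block IDs
```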
So backing up files won't be as easy as making a raw copy of the Seafile data directory. As mentioned by @JensV, you may still achieve something along those lines using the Seafile drive client.

Saving image in database

Is it good to save an image in the database as a BLOB?
Or to save only the path and copy the image to a specific directory?
Which way is best (I mean good performance for both the database and the application), and why?
What are your requirements?
In the vast majority of cases saving the path will be better, simply because of the sheer size of the files compared to the rest of the data (storing images inline can bulge the DB by gigabytes). Consider adding an indirection, e.g. save the path as a name plus a reference to a storage resource (e.g. a storage_id referencing a row in a storages table), with the base path attached to the 'storage'. This way you can easily move files (copy all the files, then update the storage path, rather than update a million individual paths).
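A minimal sketch of that indirection in Python with SQLite; the table and column names are illustrative, not prescribed by the answer:

```python
import sqlite3

con = sqlite3.connect("images.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS storages (
    storage_id INTEGER PRIMARY KEY,
    base_path  TEXT NOT NULL              -- e.g. '/mnt/volume1/images'
);
CREATE TABLE IF NOT EXISTS images (
    image_id   INTEGER PRIMARY KEY,
    storage_id INTEGER NOT NULL REFERENCES storages(storage_id),
    rel_path   TEXT NOT NULL              -- path relative to the storage root
);
""")

def image_full_path(image_id: int) -> str:
    """Resolve an image to an on-disk path via its storage row."""
    row = con.execute(
        "SELECT s.base_path || '/' || i.rel_path "
        "FROM images i JOIN storages s USING (storage_id) "
        "WHERE i.image_id = ?", (image_id,)).fetchone()
    return row[0]

# Moving every image to a new volume: copy the files, then update ONE
# storage row instead of a million individual image paths.
con.execute("UPDATE storages SET base_path = ? WHERE storage_id = ?",
            ("/mnt/volume2/images", 1))
con.commit()
```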
However, if your requirements include consistent backup/restore and/or disaster recoverability, it is often better to store the images in the DB. It is not easier, nor more convenient, but it may simply be required. Each DB has its own way of dealing with this problem, e.g. in SQL Server you would use a FILESTREAM type, which allows remote access via the file-access API. See FILESTREAM MVC: Download and Upload images from SQL Server for an example.
Also, a somewhat dated but nonetheless interesting paper on the topic: To BLOB or Not to BLOB.

Amazon S3 (AWS) NSMutableData

I have a project that downloads big files (above 50 MB) from Amazon S3. The download would stop without an error, so because of the large file size I now split the file into smaller chunks and download them simultaneously. But when I append the chunk data into a single NSMutableData in the correct order, the video won't play. Any idea about this?
Please help me, I've been stuck on this project for a whole week T_T..
You shouldn't manage this amount of data in RAM alone.
You should use secondary storage instead (namely via NSFileManager), as explained here.
When you're done downloading the file, play it normally. If you're sure the user won't really need it anymore, just delete it right after playback.
[edit]
Or you might as well just use MPMoviePlayerController pointing at that URL directly.
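To make the write-to-disk suggestion concrete, here is a rough sketch in Python (the URL and paths are hypothetical, and the original answer targets iOS/NSFileManager; the pattern is the same):

```python
import os
import requests   # assumed HTTP client; any streaming client works

URL = "https://example-bucket.s3.amazonaws.com/video.mp4"   # hypothetical
DEST = "/tmp/video.mp4"

# Stream the response straight to disk so memory use stays at one
# buffer's worth no matter how large the file is.
with requests.get(URL, stream=True) as r:
    r.raise_for_status()
    with open(DEST, "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):   # 1 MB buffers
            f.write(chunk)

# ...play the file, then delete it if the user won't need it again:
os.remove(DEST)
```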
What you need to do is create a file of the appropriate size first. Each downloader object must know its offset in the file and write the data there as it arrives, rather than store it in a mutable data object. This greatly lowers the memory footprint of the operation.
There is a second component: you must set the F_NOCACHE flag on the open file so iOS does not keep the file writes in its cache.
With both of these it should work fine. Also use a lot of asserts during development so you know ASAP if something fails, so you can correct whatever the problem is.
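A sketch of that preallocate-and-write-at-offset pattern, in Python rather than Objective-C for consistency with the other examples here (paths and sizes are made up; F_NOCACHE is Darwin-specific, hence the guard):

```python
import fcntl
import os

PATH = "/tmp/video.mp4"
TOTAL_SIZE = 60 * 1024 * 1024   # known up front, e.g. from Content-Length
CHUNK = 8 * 1024 * 1024

# Create the file at its final size first; each downloader then writes
# its chunk at a fixed offset instead of buffering it in RAM.
fd = os.open(PATH, os.O_CREAT | os.O_RDWR, 0o644)
os.ftruncate(fd, TOTAL_SIZE)

# Darwin-only: ask the OS not to keep these writes in its buffer cache
# (the F_NOCACHE advice from the answer; the flag is absent elsewhere).
if hasattr(fcntl, "F_NOCACHE"):
    fcntl.fcntl(fd, fcntl.F_NOCACHE, 1)

def write_chunk(offset: int, data: bytes) -> None:
    os.pwrite(fd, data, offset)   # positional write, safe from many threads

# e.g. the downloader that fetched byte range [CHUNK, 2*CHUNK) calls:
# write_chunk(CHUNK, downloaded_bytes)
```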

Verify .mat file exists and is not Corrupt - Matlab

I have two independent Matlab workers, with FIRST getting/saving data and SECOND reading it (and doing some calculations etc.).
FIRST saves the data as a .mat file on the hard disk while SECOND reads it from there. It takes ~20 seconds to SAVE this data as .mat and ~8 ms to DELETE it. Before SAVING the data, FIRST deletes the old file and then saves a newer version.
How can SECOND verify that the data exists and is not corrupt? I can use exist, but that doesn't tell me whether the data is corrupt. For example, if SECOND tries to read the data exactly when FIRST is saving it, exist passes but load gives an error saying the data is corrupt.
Thanks.
You can't, without some synchronization mechanism - by the time SECOND completes its check and starts to read the file, FIRST might have started writing it again. You need some sort of lock or mutex.
Two options for base Matlab.
If this is on a local filesystem, you could use a separate lock file sitting next to the data file to manage concurrent access to the data file. Use Java's NIO FileChannel and FileLock objects from within Matlab to lock the first byte of the lock file and use that as a semaphore to control access to the data file, so the reader waits until the writer is finished and vice versa. (If this is on a network filesystem, don't try this - file locking may seem to work but usually is not officially supported and in my experience is unreliable.)
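For illustration, here is the same lock-file idea sketched in Python using POSIX flock; the answer itself uses Java NIO's FileChannel/FileLock from within Matlab, and the same network-filesystem caveat applies. The helper names are placeholders:

```python
import fcntl

LOCKFILE = "data.mat.lock"   # separate lock file next to the data file

def with_data_lock(action):
    """Run `action` while holding an exclusive lock on the lock file,
    so the reader and the writer never touch data.mat concurrently."""
    with open(LOCKFILE, "w") as lf:
        fcntl.flock(lf, fcntl.LOCK_EX)   # blocks until the other side releases
        try:
            return action()
        finally:
            fcntl.flock(lf, fcntl.LOCK_UN)

# writer side: with_data_lock(lambda: write_mat_file("data.mat"))
# reader side: with_data_lock(lambda: read_mat_file("data.mat"))
# (write_mat_file/read_mat_file are placeholders for your own I/O.)
```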
Or you could just put a try/catch around your load() call and have it pause a few seconds and retry if you get a corrupt file error. The .mat file format is such that you won't get a partial read if the writer is still writing it; you'll get that corrupt file error. So you could use this as a lazy sort of collision detection and backoff. This is what I usually do.
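A sketch of that retry-and-back-off pattern, written in Python with SciPy's .mat reader for consistency with the other examples on this page; in Matlab it is simply a try/catch around load() with a pause() in the catch block. The exact exception types caught below are an assumption:

```python
import time
from scipy.io import loadmat   # assumed dependency; reads .mat files

def load_with_retry(path, retries=10, delay=3.0):
    """Lazy collision detection: if the writer is mid-save, the read
    fails rather than returning partial data; back off and retry."""
    for _ in range(retries):
        try:
            return loadmat(path)
        except (OSError, ValueError):   # truncated/corrupt while being written
            time.sleep(delay)
    raise RuntimeError("%s still unreadable after %d attempts" % (path, retries))
```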
To reduce the window of contention, consider having FIRST write to a temporary file in the same directory, and then use a rename to move it to its final destination. That way the file is only unavailable during a quick filesystem move operation, not the 20 seconds of data writing. If you have multiple writers, stick the PID and hostname in the temp file name to avoid collisions.
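A sketch of the write-then-rename idea (again in Python for consistency; in Matlab the rename step is movefile). The helper name and temp-file naming scheme are illustrative:

```python
import os
import socket

def atomic_save(final_path, save_fn):
    """Write to a temp file in the same directory, then rename it into
    place. The rename is atomic on the same filesystem, so readers
    never see a half-written file; PID + hostname in the temp name
    avoids collisions between multiple writers."""
    tmp = "%s.%s.%d.tmp" % (final_path, socket.gethostname(), os.getpid())
    save_fn(tmp)                  # placeholder: writes the .mat data to tmp
    os.replace(tmp, final_path)   # atomic within one filesystem
```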
Sounds like a classic resource-sharing problem between two workers (readers-writers).
In short, you should find a safe method of inter-worker communication. Check this out.
Also, try typing
showdemo('paralleldemo_communic_prof')
in Matlab

Core Data vs file access

I have hundreds of files that need to be accessed to display content on the iPhone. They are all plists.
Which one is faster, Core Data or file access? Which one is more secure?
You have to consider the file size first. A nice rule of thumb found on these boards (sketched in code below): if the file is under 100 kB you can store it as a BLOB attribute on the entity; if it is greater than that you may want to create a dedicated entity for it; and if it exceeds 1 MB in size you should access it through the filesystem.
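A tiny sketch encoding that rule of thumb; the thresholds come from the answer above, not from Apple documentation:

```python
def storage_strategy(size_bytes: int) -> str:
    """Pick where an attachment should live based on its size."""
    if size_bytes < 100 * 1024:          # < 100 kB
        return "BLOB attribute on the entity itself"
    if size_bytes < 1024 * 1024:         # 100 kB .. 1 MB
        return "dedicated entity holding only the blob"
    return "filesystem; store just the path in Core Data"
```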
Secondly, you should evaluate the cost of the operations too. A hundred files may sound like many, but if you access them only a few times, file access may be the way to go; on the other hand, if you need the stored information frequently, you can create dedicated Core Data entities and load the files at start-up. And so on.
This is a nice book on Core Data. You can find many guidelines by reading it, but also keep in mind the general guidelines of database design.
If they are static files I would recommend pre-loading them into a Core Data SQLite file. That would yield far better performance, especially if you structure your model properly.