Best way to stage a file from cloud storage to a Windows machine - windows-xp

I want to store a data file for QuickBooks in the cloud. I understand that the data file is more of a database-in-a-file, so I know that I don't want to simply have the data file itself sit in a cloud directory.
When I say 'cloud', I mean something like Google Drive or Box.com.
What I see working is that I write a script (a .bat file, or do they have something new and improved for Windows XP, like some .NET nonsense or something?).
The script would:
1) Download the latest copy of the data file from cloud storage and put it in a directory on the local machine.
2) Launch QuickBooks with that data file.
3) When the user exits QuickBooks, copy the data file back up to cloud storage.
4) Rejoice.
So, my question(s): Is there something that already does this? Is there an easily scriptable interface for working with the cloud storage options? In my ideal world, I'd be able to say 'scp google-drive://blah/blah.dat localdir' and have it copy the file down, then do the opposite after running QB. I'm guessing I'm not going to get that.
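Roughly, what I'm imagining is something like the sketch below (Python rather than a .bat file, just for illustration; every path is made up, and it assumes the 'cloud' side is simply a locally synced folder maintained by the Google Drive or Box desktop client rather than a true remote API):

```python
import shutil
import subprocess

CLOUD_COPY = r"G:\GoogleDrive\QBData\company-file.qbw"          # hypothetical synced-folder path
LOCAL_COPY = r"C:\QBData\company-file.qbw"                      # hypothetical local working copy
QUICKBOOKS = r"C:\Program Files\Intuit\QuickBooks\QBW32.EXE"    # hypothetical install path

# 1) Pull the latest copy of the data file down to the local machine.
shutil.copy2(CLOUD_COPY, LOCAL_COPY)

# 2) Launch QuickBooks on the local copy and block until the user exits.
subprocess.run([QUICKBOOKS, LOCAL_COPY])

# 3) Copy the (possibly modified) data file back up to the synced folder.
shutil.copy2(LOCAL_COPY, CLOUD_COPY)

# 4) Rejoice.
```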

Intuit already provides a product to do this. It is called Intuit Data Protect, and it backs up your QuickBooks company file to the cloud for you.
http://appcenter.intuit.com/intuitdataprotect
regards,
Jarred

Related

How to do local and remote file storage in a Flutter app

I'm building a Flutter app that needs to open a binary file, display the content to the user, and allow them to edit and save it. File size would be between 10 KB and 10 MB. The file also needs to be in the cloud for sharing and for access from other devices.

To minimise remote network egress charges and the user's local mobile data charges, I'm envisaging that when the user saves the file it would be saved locally rather than written to the cloud, and only written to the cloud when the user closes the file, or maybe at regular intervals or after a period of no activity. To minimise network data charges further, I would like to keep a permanent local copy, and the remote copy of the file would have a small supporting file that identifies who last wrote the remote file and when. When the app starts, it checks whether its local copy is up to date by reading the supporting file. The data does not need high security.
The app will run on Android, iOS, the web, and preferably on the desktop, though I know that the Google Firebase SDK for Windows is incomplete/unavailable.
Is Google Firebase Cloud Storage the easiest and best way to do this? If not, what is the easiest way?
Are there any cloud storage providers that don't charge for network egress data, just for storage?
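For concreteness, the startup check I have in mind is something like this (sketched in Python rather than Dart, purely for illustration; the field names and the way the metadata is fetched are made up):

```python
import json

def local_copy_is_current(local_meta_path: str, remote_meta_json: str) -> bool:
    """Compare the cached metadata with the small remote supporting file."""
    with open(local_meta_path) as f:
        local_meta = json.load(f)   # e.g. {"last_written_by": "...", "last_written_at": "..."}
    remote_meta = json.loads(remote_meta_json)
    return local_meta.get("last_written_at") == remote_meta.get("last_written_at")

# On startup: download only the small supporting file; fetch the full binary
# file only if this check says the local copy is stale.
```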

Azure Function temp file

When getting content from an endpoint in Azure Functions, I used to be able to save it locally and then handle it before passing it to the function's output.
Now I have this setup:
1: What happens when I call my endpoint
2: My function code
3: My call to the function
4: The contents of C:\Local\Temp AFTER the function has been called
According to my function code (2) the file C:\local\Temp\cspcustomer.parquet should exist, but when trying to read the file I obviously get an error.
Furthermore, when looking at the actual contents of C:\local\Temp in Kudu (4), the file is not really there.
My question is, where is my file, so I can continue my work?
An Azure function app has different file system storage locations:
D:\local
This is local, temporary storage, and files here cannot be shared across instances. Temporary means this storage goes away as soon as your Azure function is removed from the virtual machine. You can store up to 500 MB of data here.
If your Azure function app has, say, 3 instances, each of those instances actually runs on its own virtual machine, so each instance has its own D:\local storage of up to 500 MB.
D:\home
This is shared storage: all of your Azure Function app instances have access to it. Unlike D:\local, if you delete your function app or move it somewhere else, this storage does not go away; it remains as it is.
You can use the Home directory instead of Local.
I was able to see the parquet file in the Home directory.
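A minimal sketch of the idea (shown as Python for illustration; the function in the question appears to be PowerShell, but the same approach applies, and the file name is taken from the question):

```python
import os
import tempfile

# On Windows-hosted function apps, the HOME environment variable points at the
# shared D:\home area; fall back to the temp directory only for local testing.
home = os.environ.get("HOME", tempfile.gettempdir())
out_dir = os.path.join(home, "data")
os.makedirs(out_dir, exist_ok=True)

out_path = os.path.join(out_dir, "cspcustomer.parquet")
with open(out_path, "wb") as f:
    f.write(b"...")  # placeholder for the parquet bytes fetched from the endpoint
```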
It turns out this issue will be fixed once the PowerShell worker is updated: https://github.com/Azure/azure-functions-powershell-worker/issues/733
The PowerShell version in Azure Functions is a bit behind; it will be updated to 7.2, as required by the module.

How to download a file from a URL and store it in an AWS S3 bucket?

As stated, I'm trying to download this dataset of zip archives containing images: https://data.broadinstitute.org/bbbc/BBBC006/ and store them in an S3 bucket so I can later unzip them in the bucket, reorganize them, and pull them into a VM in smaller chunks for some computation. The problem is, I don't know how to get the data from https://data.broadinstitute.org/bbbc/BBBC006/BBBC006_v1_images_z_00.zip, for example (or any of the others), and then send it to S3.
This is my first time using AWS or really any cloud platform, so please bear with me :]
Amazon EC2 provides a virtual computer just like a normal Linux or Windows computer.
Amazon S3 is an object storage service where you can upload/download files.
If you wish to copy files from a website to Amazon S3, you will need to write an application or script that will:
Download the files from the website
Upload them to Amazon S3
If you wish to do it from a script, you could use the AWS Command-Line Interface (CLI).
Or you could do it from a programming language; see SDKs and Programming Toolkits for AWS.
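For example, a rough Python sketch using the requests and boto3 libraries (the bucket name and key are hypothetical, and AWS credentials are assumed to be configured already):

```python
import boto3
import requests

URL = "https://data.broadinstitute.org/bbbc/BBBC006/BBBC006_v1_images_z_00.zip"
BUCKET = "my-bbbc006-bucket"              # hypothetical bucket name
KEY = "raw/BBBC006_v1_images_z_00.zip"    # hypothetical object key

s3 = boto3.client("s3")

# Stream the download so the whole zip never has to fit in memory,
# then hand the stream straight to S3.
with requests.get(URL, stream=True) as resp:
    resp.raise_for_status()
    s3.upload_fileobj(resp.raw, BUCKET, KEY)
```

For a one-off transfer you could equally download the zip first and push it with the AWS CLI (aws s3 cp).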

CSV Importing With Rails, Postgres, and Sidekiq

I'm building a customer management system using Rails that requires CSV files containing customer information to be imported into, and diffed against, a Postgres database. I'm hosting the application on Heroku. I've moved the import processing to the background with Sidekiq, but I need advice on where to upload the file in the first place for importing. Is hosting the file on S3 really the best solution, or is there a simpler solution without using a third-party storage service? The application will be used daily by up to 10 employees, and the largest CSV file being uploaded is around 100,000 rows.
Thanks.
Yes, I do think S3 is the best solution.
We faced the same problem at Storemapper (we use Resque instead of Sidekiq, but that's not a problem). The limiting factor here is the Heroku request timeout: you only have 30s to finish your upload to Heroku, which puts a hard limit on how big your CSV can be. This is where S3 comes in. Basically, what we do is:
1) The user uploads the CSV directly to S3 via JavaScript, bypassing our app server on Heroku.
2) Once the upload completes, the JavaScript makes a request to the app server, which launches a background worker and tells it where the file is on S3.
3) The worker downloads the CSV from S3, then processes it as necessary.
I found the carrierwave_direct gem to be very helpful for steps 1 and 2. For step 3, I use the smarter_csv gem. Check out our complete story here:
https://tylertringas.com/very-large-csv-import-in-rails-on-heroku/
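If it helps, step 3 in rough outline looks like this (sketched in Python only for brevity; the Ruby-specific details are covered by the smarter_csv gem and the post above, and the bucket/key names are hypothetical):

```python
import csv
import io

import boto3

def process_customer_csv(bucket: str, key: str) -> None:
    """Background-worker step: pull the uploaded CSV from S3 and walk its rows."""
    s3 = boto3.client("s3")
    data = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    for row in csv.DictReader(io.StringIO(data)):
        # Diff/upsert each customer row against the database here.
        pass
```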

Restore full external ESENT backup

I've written code that creates full backups of my ESENT database using the JetBeginExternalBackup API.
Following the MSDN guidelines, I backed up every file returned by JetGetAttachInfo and JetGetLogInfo.
I've made the backup, erased the old database, and copied the backup data to the database folder.
The DB engine was unable to start; the JetInit error code is JET_errMissingLogFile.
I've checked the backup: it only contains the database file and the "<inst>XXXXX.log" log files. It lacks the current log file (I'm using circular logging, BTW).
Is there any way to restore such backup?
I don't want to use the JetExternalRestore API because it's too complex: I don't need to restore to another location, I don't understand why there are 3 input folders rather than 2, and I don't know what values to supply in the genLow and genHigh arguments.
I do need external backups: the ESENT database is used by ASP.NET on a remote server, and I'm backing it up over the Internet.
Or, maybe there's a way to retrieve the name of the current log file, and I should just add it to the backup?
Thanks in advance!
P.S. I don't have permission to spawn processes on my web server, so using eseutil.exe is not an option.
Unpack all backed up files to a single folder.
Take the name of your main database file, change its extension to .pat, and create a zero-length file with that name, e.g. database.pat.
After this simple step, call the JetRestoreInstance API; it will restore the backup from that folder.
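A minimal sketch of those preparation steps (paths and file names are hypothetical; the restore itself remains the JetRestoreInstance call from your own code, e.g. via ManagedEsent in .NET):

```python
import os

backup_dir = r"C:\esent-restore"   # hypothetical folder holding every backed-up file
db_file = "database.edb"           # hypothetical name of the main database file

# 1) All backed-up files (the .edb plus the <inst>XXXXX.log files) are assumed
#    to have been unpacked into backup_dir already.

# 2) Create a zero-length .pat file named after the main database file.
pat_path = os.path.join(backup_dir, os.path.splitext(db_file)[0] + ".pat")
open(pat_path, "wb").close()

# 3) Point the JetRestoreInstance API at backup_dir to restore the backup.
```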