I have an input file (.json, .jpg, .txt, etc.) that I need to store in a memcached server.
I don't want to convert the file to a byte[] before passing it to the memcached server.
Is there any way to store a file directly in the memcached server?
I am using the spymemcached client.
Example:
MemcachedClient mc = new MemcachedClient(new InetSocketAddress("192.168.7.104", 11211));
File file = new File("D:\\test.txt");
mc.set("Key1", 3600, file);
mc.get("Key1");
You would have to read the data from the file and store that data in memcached. A File object is only a handle to the file; it doesn't actually contain the data stored in the file.
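For what it's worth, a minimal sketch of that approach with spymemcached might look like the following; the host, port, and file path are reused from the question, and the file contents are read into a byte[] first, since that is what memcached actually stores.

import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Paths;
import net.spy.memcached.MemcachedClient;

public class StoreFileInMemcached {
    public static void main(String[] args) throws Exception {
        // Connect to the memcached server (host and port taken from the question).
        MemcachedClient mc = new MemcachedClient(new InetSocketAddress("192.168.7.104", 11211));

        // Read the file contents into memory; memcached stores values, not file handles.
        byte[] data = Files.readAllBytes(Paths.get("D:\\test.txt"));

        // Store the raw bytes under "Key1" with a one-hour expiry, then read them back.
        mc.set("Key1", 3600, data);
        byte[] cached = (byte[]) mc.get("Key1");

        mc.shutdown();
    }
}

Keep in mind memcached's default item size limit of 1 MB, so this only works for reasonably small files.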
If I want to take a backup of the MongoDB data files and transfer them to a different server, how can I do that? In the data path I can see many files with the prefix collection or index, ending in *.wt.
I tried copying all of the files, but the service stopped working.
I'm trying to take the data from version 3.2 and restore it in version 5.
Using mongoimport and mongoexport works, but the challenge is that this can't be done on the production data because the data size is 8 TB+.
So I'm looking for a solution where I can copy only the data files from the data path and move them to the version 5 data path.
I would like to use Data Factory to regularly download 500,000 JSON files from a web API and store them in a blob storage container. Then I need to parse the JSON files to extract some values from each file and store these values, together with an ID (part of the filename), in a database. I can do this using a ForEach activity that runs a custom activity for each file, but this is very slow, so I would prefer some batch activity that could run the same parsing code on each file. Is there some way to do this?
If your source JSON files have the same schema, you can leverage the Copy activity, which can parse those files in a single run. If possible, though, I would suggest splitting the files into different subfolders (e.g. 1,000 files per folder), so that each copy run takes less time and is easier to manage.
Refer to this doc for more details: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview
I have a site that I created using MongoDB, but now I want to create a new site with MySQL. I want to retrieve the data from my old site (the one using MongoDB). I use Robomongo to connect to the MongoDB server, but I don't see my old data (*.pdf, *.doc). I think the data is stored as binary, isn't it?
How can I retrieve this data?
The binary data you've highlighted is stored using a convention called GridFS. Robomongo 0.8.x doesn't support decoding GridFS binary data (see: issue #255).
In order to extract the files, you'll need to either:
use the command-line mongofiles utility included with MongoDB. For example:
mongofiles list to see the files stored
mongofiles get <filename> to download a specific file
use a different program or driver that supports GridFS (a sketch using the MongoDB Java driver follows below)
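As an illustration of the second option, here is a minimal sketch using the MongoDB Java driver's GridFS API; the connection string and the database name mydb are assumptions for the example, so substitute the server and database that actually hold the fs.files / fs.chunks collections.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import com.mongodb.client.gridfs.model.GridFSFile;
import java.io.FileOutputStream;

public class GridFsExport {
    public static void main(String[] args) throws Exception {
        // Connection string and database name are placeholders for this sketch.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("mydb");
            GridFSBucket bucket = GridFSBuckets.create(db);

            // List every file stored in GridFS and download each one to the current directory.
            for (GridFSFile file : bucket.find()) {
                try (FileOutputStream out = new FileOutputStream(file.getFilename())) {
                    bucket.downloadToStream(file.getFilename(), out);
                }
            }
        }
    }
}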
I am trying to understand how to correctly load a large file into a database. I understand how to get the file out of the database and stream it back without using too many resources, by using a DataReader to read into a buffer and then writing the buffer to the OutputStream.
When it comes to storing the file, all of the examples I could find read the entire file into a byte array and then supply it as a data parameter.
Is there a way to store the file in a database without having to read the entire file into memory first?
I am using ASP.NET and SQL Server.
If you can use .NET 4.5, there is new support for streaming. Also, see "Using ADO's new async methods", which gives some complementary examples.
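The linked material is C#-specific; purely as an illustration of the same pattern in Java (the language used by the other code on this page), the idea is to hand the database driver an InputStream instead of a byte[], so the application never buffers the whole file in memory. The connection string, table, and column names below are made up for the sketch.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class StreamFileIntoDb {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("large-upload.bin"); // placeholder file name

        // JDBC URL, table, and column names are placeholders; requires the SQL Server JDBC driver.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=Files;integratedSecurity=true");
             InputStream in = Files.newInputStream(file);
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO Documents (Name, Content) VALUES (?, ?)")) {
            ps.setString(1, file.getFileName().toString());
            // Pass a stream rather than a byte[]; the application code never loads
            // the whole file into memory at once.
            ps.setBinaryStream(2, in, Files.size(file));
            ps.executeUpdate();
        }
    }
}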
I would like to get data from a .cdb file. Is it possible to retrieve data from a .cdb file without knowing the key names?
If you are talking about CDB (constant database) files, the cdbdump program (which reads the .cdb file from standard input) will dump all of the records in cdbmake format on standard output, so you don't need to know any key names.