I am following the documentation at:
https://cloud.google.com/sdk/gcloud/reference/firestore/import
I am trying to restore a collection named pets from a backup taken today. However, I am getting this:
# gcloud firestore import gs://pets-backup/2021-02-26T02:27:05_54372/ --collection-ids='pets'
ERROR: (gcloud.firestore.import) INVALID_ARGUMENT: The requested kinds/namespaces are not available
I can confirm that both the storage bucket and the pets collection exist.
The error is not very helpful; I am not sure what I am dealing with.
I noticed that within the export there is a folder path /all_namespaces/all_kinds. When I try to import from it directly, I am getting:
gcloud firestore import 'gs://pets-backup/2021-02-26T02:27:05_54372/all_namespaces/all_kinds' --collection-ids='pets'
ERROR: (gcloud.firestore.import) NOT_FOUND: Google Cloud Storage file does not exist: /pets-backup/2021-02-26T02:27:05_54372/all_namespaces/all_kinds/all_kinds.overall_export_metadata
I can see there is only a file named all_namespaces_all_kinds.export_metadata, which doesn't match the file the import tool is looking for.
As confirmed by you in the comments, you are trying to extract a collection from an export of all the collections. Unfortunately this is currently not possible, as you can see in this documentation:
Only an export of specific collection groups supports an import of specific collection groups. You cannot import specific collections from an export of all documents.
If you'd like this to be changed, you can submit a Feature Request in Google's Issue Tracker so that they can consider implementing this functionality.
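As a workaround, you can create a new export that is limited to the collections you need; an export made with --collection-ids can then be imported with the same flag. A rough sketch reusing the bucket from your example (the export writes its output under an auto-generated timestamp prefix):
gcloud firestore export gs://pets-backup --collection-ids='pets'
gcloud firestore import gs://pets-backup/[EXPORT_PREFIX]/ --collection-ids='pets'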
Related
Hi, I'm trying to retrieve data from Firebase using the user's input in the search bar.
The DB structure is like this:
[image: DB structure]
Every doc inside the collection "catalog" has an additional subcollection named "recipes", and inside "recipes" are recipes with titles.
[image: example DB structure]
How can I compare the user's input with all recipes inside all docs?
Currently, using this code, it works for one doc:
[image: code]
If you want true full-text search capabilities, you will need to use a tool designed for that problem, such as Elasticsearch, Typesense, or Algolia.
Each of these is a standalone service, but they integrate very seamlessly when used as part of Firebase Extensions. Extensions, however, need a credit card on file to be enabled.
Note that each of these may have cost implications and limitations on how you structure your database. Algolia, for example, requires your text-searchable entries to be in one collection.
You might then use a Firestore structure as follows:
foodTypes > meat > recipes > <recipe Id>
where each food type (e.g. meat) is a document and recipes is a subcollection under it.
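If an exact or prefix match is enough for your use case (rather than true full-text search), a collection group query can search every recipes subcollection at once, regardless of which doc it sits under. A minimal sketch using the google-cloud-firestore Python client; the title field name comes from the question, and Firestore will prompt you to create the required collection-group index the first time you run it:
from google.cloud import firestore

db = firestore.Client()
term = "chicken"  # hypothetical user input from the search bar

# Query every subcollection named "recipes" at once; the \uf8ff upper
# bound turns the range filter into a prefix match on title.
matches = (
    db.collection_group("recipes")
    .where("title", ">=", term)
    .where("title", "<=", term + "\uf8ff")
    .stream()
)
for doc in matches:
    print(doc.reference.path, doc.to_dict())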
I am using the sacred package in Python, which allows me to keep track of the computational experiments I'm running. sacred lets you add an observer (MongoDB) which stores all sorts of information regarding the experiment (configuration, source files, etc.).
sacred lets you add artifacts to the DB by using sacred.Experiment.add_artifact(PATH_TO_FILE).
This command essentially adds the file to the DB.
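For reference, a minimal sketch of what I am doing (the experiment name and file are made up; assuming sacred >= 0.8 with a local MongoDB):
from sacred import Experiment
from sacred.observers import MongoObserver

ex = Experiment("demo")
# Defaults to mongodb://localhost:27017 and the "sacred" database.
ex.observers.append(MongoObserver())

@ex.main
def run():
    with open("results.txt", "w") as f:
        f.write("some output")
    ex.add_artifact("results.txt")  # the observer stores the file via GridFS

if __name__ == "__main__":
    ex.run()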
Using MongoDB Compass, I can access the experiment information and see that an artifact has been added. It contains two fields:
'name' and 'file_id', the latter holding an ObjectId (see image).
I am attempting to access the stored file itself. I have noticed that under my DB there is an additional collection called fs.files; in it I can filter to find my ObjectId, but it does not seem to let me access the content of the file itself.
Code example for GridFS:
import gridfs
import pymongo

If you already have the ObjectId ("sacred" is the database the observer writes to):
artifact = gridfs.GridFS(pymongo.MongoClient().sacred).get(objectid)
content = artifact.read()  # raw bytes of the stored file

To find the ObjectId for an artifact named filename, with result as one entry of db.runs.find():
objectid = next(a['file_id'] for a in result['artifacts'] if a['name'] == filename)
MongoDB file storage is handled by "GridFS", which basically splits files into chunks and stores them in dedicated collections (fs.files for the metadata, fs.chunks for the data).
Tutorial on accessing it: http://api.mongodb.com/python/current/examples/gridfs.html
I wrote a small library called incense to access data stored in MongoDB via sacred. It is available on GitHub at https://github.com/JarnoRFB/incense and via pip. With it you can load experiments as Python objects. The artifacts will be available as objects that you can save to disk again or display in a Jupyter notebook:
from incense import ExperimentLoader
loader = ExperimentLoader(db_name="my_db")
exp = loader.find_by_id(1)
print(exp.artifacts)
exp.artifacts["my_artifact"].save() # Save artifact on disk.
exp.artifacts["my_artifact"].render() # Display artifact in notebook.
How to import ranker training data using the Retrieve and Rank dashboard feature in Bluemix?
I have followed these steps:
Imported Documents (successful)
Imported Questions (successful)
Import training data (failed: I get the message "All exported data has already been imported into the system. There are no updates required."), but no ranker is imported.
The error "All exported data has already been imported into the system. There are no updates required." is triggered if the training data does not match the uploaded questions and documents. Hence, it is very important to make sure that the complete collections of documents and questions are imported first.
We have three organization tenants: Dev, Test, and Live, all hosted on-premises (CRM 2011 [5.0.9690.4376] [DB 5.0.9690.4376]).
Because of the way dialogs use GUIDs to reference records in lookups, we aim to keep the GUIDs of static records the same across all three tenants.
While all other entities are working fine, I am failing to import USERS while maintaining their GUIDs. I am using Export/Import to get the data from the master tenant (Dev) into the Test and Live tenants. It is very similar to what the Configuration Migration tool does in CRM 2013.
The issue I am facing is that for all other entities I can see the GUID field and can map it during the import wizard, but no such field shows up for the SystemUser entity when running the import wizard. For example, with Account, I will export an Account, amend the CSV file, and import it into the target tenant. When I do this, I map AccountId (from the target) to the AccountId of the source, and as a result this account's AccountId will be the same in both source and target.
At this point I am about to give up, but that means all dialogs that use a User lookup will fail.
Thank you for your help,
Try the following steps. I would strongly recommend trying this on an old, out-of-use tenant before trying it on a live system. I am not sure if this is supported by MS, but it works for me. (One more thing: you will have to manually assign business units and roles after the import.)
Create an Advanced Find. Include all required fields for the SystemUser record. Add criteria that select the list of users you would like to move across.
Export.
Save the file as CSV (this will reveal the first few hidden columns in Excel).
Rename the primary key column (in this case, User) and remove all other "Do Not Modify" columns.
Import the file and map this User column (containing the GUID) to the User field in CRM.
After the import, check that the GUIDs match in both tenants.
Good luck.
My only suggestion is that you could try to write a small console application that connects to both your source and destination organisations.
Using that, you can duplicate the user records from the source to the destination, preserving the IDs in the process.
I can't say 100% that it'll work, but I can't immediately think of a reason why it wouldn't. This assumes that none of the users you're copying over already exist in your target environments.
I prefer to resolve these issues by creating custom workflow activities. For example, you could create a custom workflow activity that returns a user record for an input domain name given as a string.
This means your dialogs contain only shared configuration values, e.g. mydomain\james.wood, which are used to dynamically find the record you need. Your dialog is then still tied to a specific record, but without having to encode the source GUID.
Same situation as in this question, but with the current DerbyJS (version 0.6):
Using imported docs from MongoDB in DerbyJS
I have a MongoDB collection with data that was not saved through my Derby app. I want to query against that and pull it into my Derby app.
Is this still possible?
The accepted answer there points to a dead link. The newest working link would be this: https://github.com/derbyjs/racer/blob/0.3/lib/descriptor/query/README.md
That refers to the 0.3 branch of Racer (the current master version is 0.6).
What I tried
Searching the internets
The naïve way:
var query = model.query('projects-legacy', { public: true });
model.fetch(query, function() {
  query.ref('_page.projects');
});
(doesn't work)
A utility was written for this purpose: https://github.com/share/igor
You may need to modify it to run against only a single collection instead of the whole database, but essentially it goes through every document in the database, adds the necessary livedb metadata to it, and creates a default operation for it as well.
In livedb, every collection has a corresponding operations collection; for example, profiles will have a profiles_ops collection which holds all the operations for profiles.
You will have to convert the collection to use it with Racer/livedb, because of the metadata livedb expects on the document itself.
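For illustration, a rough pymongo sketch of the kind of conversion igor performs, applied to the projects-legacy collection from the question. The exact snapshot and operation fields differ between livedb versions, so treat the field names below as assumptions and prefer running igor itself:
import time
import pymongo

db = pymongo.MongoClient().mydb  # hypothetical database name

for doc in db['projects-legacy'].find():
    # Tag the document as a livedb JSON snapshot at version 1
    # (assumed metadata fields; verify against your livedb version).
    db['projects-legacy'].update_one(
        {'_id': doc['_id']},
        {'$set': {'_v': 1, '_type': 'http://sharejs.org/types/JSONv0'}})
    # Record a matching create operation in the ops collection.
    data = {k: v for k, v in doc.items() if not k.startswith('_')}
    db['projects-legacy_ops'].insert_one({
        'name': str(doc['_id']),  # assumed: ops reference the doc id
        'v': 0,
        'create': {'type': 'json0', 'data': data},
        'm': {'ts': int(time.time() * 1000)},
    })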
An alternative, if you don't want to convert, is to use traditional AJAX/REST to get the data from your Mongo database and then just put it in your local model. This will not be real-time or synced to the server, but it will let you drive your templates with data that you don't want to convert for some reason.