I am trying to index blob content in Azure Search. I added the blob content to the search index through a blob indexer.
I am using MongoDB to store the uploaded file information along with the blob path. We have to add some tags to each file, and these tags are stored in MongoDB. Now I want to add these tags to Azure Search for that file, along with the file content.
The problems I am facing are:
Problem 1: Maintaining uniqueness (the search key field) between the MongoDB record and the blob indexer. Initially, I wanted to use the metadata_storage_path from the blob indexer and the base64-encoded blob path that I had stored in MongoDB. The problem is that the metadata_storage_path never matches the base64-encoded blob path produced by my Node.js code.
Problem 2: To work around Problem 1, I tried another approach: storing my MongoDB file ID (FID) as a custom metadata field on the blob, to get a unique search key shared by the search index and the MongoDB record. The problem here is: how can I map the custom metadata field to the key field? I am not able to index the blob's custom metadata fields.
In both scenarios I am unable to achieve the expected results. How can I establish a shared search index key field between MongoDB and the Azure blob?
You can use the base64-encoded blob path as the document key, which you can get in both indexers by using a base64 field mapping. Check https://learn.microsoft.com/en-us/azure/search/search-indexer-field-mappings#base64EncodeFunction for all the options to match your Node.js encoding function.
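For comparison, here is a Node.js sketch of the URL-safe variant of that encoding. Whether your indexer emits this form or the legacy HttpServerUtility.UrlTokenEncode form depends on the indexer's configuration, so verify against the field-mapping documentation linked above:

```javascript
// URL-safe Base64 without padding: one of the encodings Azure Search's
// base64Encode field mapping can produce (which variant applies depends on
// the useHttpServerUtilityUrlTokenEncode setting -- check your indexer).
function encodeDocumentKey(blobPath) {
  return Buffer.from(blobPath, "utf8")
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}
```

If the keys still differ, compare a single blob's metadata_storage_path from the index against this function's output for the same path to see which variant your indexer uses.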
I'm new to MongoDB. I'm trying to export a column's content from a MongoDB collection into flat files, store them in an Azure blob, and replace the column content with the path to the exported files.
Originally, PDF files were stored in a column in the collection, but the decision was made to export the column contents back into PDF files and reference each file by its location in the same column instead.
Hope this makes sense.
Thank you
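One way to structure this, sketched in Node.js; the field name pdfData and the container URL below are assumptions for illustration, not from the question:

```javascript
// Hypothetical sketch: builds the blob name for a document's PDF column and
// the MongoDB update that replaces the column with the blob's path.
// "pdfData" and the container URL are assumed names.
function planExport(doc, containerUrl) {
  const blobName = `${doc._id}.pdf`;
  return {
    blobName,
    data: doc.pdfData, // Buffer holding the original PDF bytes
    update: { $set: { pdfData: `${containerUrl}/${blobName}` } },
  };
}
```

The actual upload could then use @azure/storage-blob (for example a BlockBlobClient), followed by an updateOne with the returned update, so the column is only overwritten after the upload succeeds.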
I download JSON files from a web API and store them in blob storage using a Copy Data activity with binary copy. Next I would like to use another Copy Data activity to extract a value from each JSON file in the blob container and store the value, together with its ID, in a database. The ID is part of the filename; is there some way to extract the filename?
You can do the following set of activities:
1) A GetMetadata activity: configure a dataset pointing to the blob folder, and add Child Items to the Field List.
2) A ForEach activity that takes every item from the GetMetadata activity and iterates over them. To do this, configure its Items to be @activity('NameOfGetMetadataActivity').output.childItems
3) Inside the ForEach, you can extract the filename of each file using the following expression: item().name
After this, continue as you see fit: either add functions to extract the ID or copy the entire name.
Hope this helped!
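The ID-extraction step can be sketched in plain JavaScript; the filename pattern order_12345.json is an assumption, and inside the pipeline itself you would express the same logic with ADF string functions applied to item().name:

```javascript
// Extract the numeric ID from a filename like "order_12345.json".
// The naming pattern is assumed -- adjust the regex to your files.
function idFromFilename(name) {
  const match = name.match(/(\d+)\.json$/);
  return match ? match[1] : null;
}
```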
After setting up the source dataset (file path with a wildcard) and the destination/sink dataset (a table):
Add a Copy activity and configure the source and sink.
In the source, add an Additional Column: give it a name and the value "$$FILEPATH".
Import the mapping, and voilà: your additional column should appear in the list of source columns, marked "Additional".
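The steps above correspond to a Copy activity source along these lines (a trimmed sketch, not a complete pipeline definition; the source type depends on your dataset format):

```json
"source": {
  "type": "DelimitedTextSource",
  "additionalColumns": [
    { "name": "sourceFileName", "value": "$$FILEPATH" }
  ]
}
```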
I have a MongoDB collection HeaderDetail with the columns headerName and metricType.
I tried to store the headerName values as headers (column names) in another collection using the Talend pivot component, but they are stored as plain column values instead.
How can I store one collection's column values as headers in another collection?
It's not possible in general: a component's schema is configured before the job starts and can't be modified at runtime.
But you can do it using a Dynamic schema (not available in the free version of the studio) and add columns using tJavaRow components.
I have a field in MongoDB that contains large amounts of user data.
I need to apply SHA-256 or SHA-512 to the contents of the field while keeping the original.
I need another field to be generated with the SHA value of the original field.
I would like to automate this task, does anybody have any suggestions?
I can't find the image column in the res_partner table in an Odoo 9 PostgreSQL database. Where does Odoo 9 store this image field?
As of Odoo 9, many binary fields have been modified to be stored inside the ir.attachment model (ir_attachment table). This was done in order to benefit from the filesystem storage (and deduplication properties) and avoid bloating the database.
This is enabled on binary fields with the attachment=True parameter, as it is done for res.partner's image fields.
When active, the get() and set() method of the binary fields will store and retrieve the value in the ir.attachment table. If you look at the code, you will see that the attachments use the following values to establish the link to the original record:
name: name of the binary field, e.g. image
res_field: name of the binary field, e.g. image
res_model: model containing the field, e.g. res.partner
res_id: ID of the record the binary field belongs to
type: 'binary'
datas: virtual field with the contents of the binary field, which is actually stored on disk
So if you'd like to retrieve the ir.attachment record holding the value of the image of res.partner with ID 32, you could use the following SQL:
SELECT id, store_fname FROM ir_attachment
WHERE res_model = 'res.partner' AND res_field = 'image' AND res_id = 32;
Because ir_attachment entries use the filesystem storage by default, the actual value of the store_fname field will give you the path to the image file inside your Odoo filestore, in the form 'ab/abcdef0123456789' where the abc... value is the SHA-1 hash of the file. This is how Odoo implements de-duplication of attachments: several attachments with the same file will map to the same unique file on disk.
If you'd like to modify the value of the image field programmatically, it is strongly recommended to use the ORM API (e.g. the write() method), to avoid creating inconsistencies or having to manually re-implement the file storage system.
References
Here is the original 9.0 commit that introduces the feature for storing binary fields as attachments
And the 9.0 commit that converts the image field of res.partner to use it.