How to map JPA objects to DB as well as file system

I have a JPA object which is mapped to a table in the DB, with all of its columns. One of those columns stores the location of a file on the file system.
So whenever I retrieve this data, is it also possible to get the file contents?
Example: a User table is mapped to a User object, and that User table has a column named picture which stores the URL of the file. Whenever I retrieve the Users, can I also retrieve the file contents? If so, what does my mapping look like?

No, JPA won't do that for you. You'll have to load the file contents yourself, or store the contents of the file in the database, as a BLOB, instead of storing it on the file system.
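A common way to do it yourself is to keep the path column mapped as usual, add an extra field marked @Transient so JPA ignores it, and fill that field in a @PostLoad callback. A minimal sketch, assuming a users table with a picture column holding a local file path (all names here are made up for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import javax.persistence.*;

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue
    private Long id;

    // Mapped column: only the file's location is stored in the DB
    @Column(name = "picture")
    private String picturePath;

    // Not mapped to any column; populated after the entity is loaded
    @Transient
    private byte[] pictureContents;

    @PostLoad
    private void loadPictureContents() {
        if (picturePath == null) {
            return;
        }
        try {
            pictureContents = Files.readAllBytes(Paths.get(picturePath));
        } catch (IOException e) {
            pictureContents = null; // decide how missing/unreadable files should be handled
        }
    }

    public byte[] getPictureContents() {
        return pictureContents;
    }
}

Keep in mind that @PostLoad runs on every load, so if the files are large or rarely needed, reading them lazily in the getter is usually the better trade-off.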

Related

Azure Data Factory - How to read data from text files and dump into respective tables based on content in the file

I have 500+ text files with content in a similar format, e.g.
Employee_3_1
Id                          1
Name                    Sam
Email                     Sam@gmail.com
Employee_3_1
EmployeeAddress_3_1
Street                     Stanley street
City                         Huston
Zip                          060000
State                       TX
EmployeeAddress_3_1
Multiple data blocks like this are present in each text file.
I wanted to know whether, using Azure Data Factory, it is possible to read this data and then copy it to the respective tables like Employee, EmployeeAddress, EmployeeSalary, etc.

Azure Data Factory V2 Copy Activity - Save List of All Copied Files

I have pipelines that copy files from on-premises to different sinks, such as on-premises and SFTP.
I would like to save a list of all files that were copied in each run for reporting.
I tried using Get Metadata and ForEach, but I'm not sure how to save the output to a flat file or even a database table.
Alternatively, is it possible to find the list of objects that were copied somewhere in the Data Factory logs?
Thank you
Update:
Items: @activity('Get Metadata1').output.childItems
If you want to record the source file names, yes, we can do that. As you said, we need to use the Get Metadata and ForEach activities.
I've created a test to save the source file names of the Copy activity into a SQL table.
We can get the file list via the Child items field of the Get Metadata activity.
The dataset of the Get Metadata1 activity points at the test container, which holds several files.
Inside the ForEach activity we can traverse this array. I set up a Copy activity named Copy-Files to copy files from source to destination.
@item().name represents each file in the test container. I key in the dynamic content @item().name to specify the file name, so the ForEach passes the file names in the test container one by one. The copy task is executed in batches, and each batch receives a single file name to copy, which lets us record each file name into the database table later.
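Put together, the dynamic content used so far (with the activity names from this walkthrough) looks like this:

ForEach Items:                @activity('Get Metadata1').output.childItems
Copy-Files source file name:  @item().name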
Then I set another Copy activity to save the file names into a SQL table. Here I'm using Azure SQL and I've created a simple table.
create table dbo.File_Names(
    Copy_File_Name varchar(max)
);
As this post also says, we can use a query like select '@{item().name}' as Copy_File_Name as the source to access activity data in ADF. Note: the alias must match the column name in the SQL table.
Then we can sink the file names into the SQL table.
Select the table created previously.
After I run debug, I can see all the file names are saved into the table.
If you want to add more information, you can refer to the post I mentioned previously.

How to copy data from a CSV to an Azure SQL Server table?

I have a dataset based on a CSV file. This exposes data as follows:
Name,Age
John,23
I have an Azure SQL Server instance with a table named: [People]
This has columns
Name, Age
I am using the Copy Data activity and trying to copy data from the CSV dataset into the Azure SQL table.
There is no option to indicate the target table name; instead I have a space to input a Stored Procedure name?
How does this work? Where do I put the target table name?
You should definitely have a table name to write to; if you don't have a table, something is wrong with your setup. Make sure you have a table to write to, and that the field names in your table match the fields in the CSV file. Then follow the walkthrough linked below. There are several steps to click through, but they are all pretty intuitive, so just follow them one by one and you should be fine.
http://normalian.hatenablog.com/entry/2017/09/04/233320
You can add records into the SQL Database table directly, without stored procedures, by configuring the table on the Sink Dataset rather than on the Copy Activity, which is what is happening here. The Table field lives within the dataset definition itself.
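For the example above, the sink table just needs columns matching the CSV header. A minimal sketch (the column types are assumptions):

create table dbo.People(
    Name varchar(100),
    Age int
);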

Retrieve blob file name in Copy Data activity

I download json files from a web API and store them in blob storage using a Copy Data activity and binary copy. Next I would like to use another Copy Data activity to extract a value from each json file in the blob container and store the value together with its ID in a database. The ID is part of the filename, but is there some way to extract the filename?
You can do the following set of activities:
1) A GetMetadata activity, configure a dataset pointing to the blob folder, and add the Child Items in the Field List.
2) A ForEach activity that takes every item from the GetMetadata activity and iterates over them. To do this you configure the Items to be @activity('NameOfGetMetadataActivity').output.childItems
3) Inside the foreach, you can extract the filename of each file using the following function: item().name
After this continue as you see fit, either adding functions to get the ID or copy the entire name.
Hope this helped!
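Since the ID is part of the file name, you can peel it out with ADF's string functions inside the ForEach. For example, if the files were named like 123.json (an assumed pattern), either of these expressions would yield the ID:

@replace(item().name, '.json', '')
@first(split(item().name, '.'))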
After setting up a dataset for the source file/file path (with a wildcard) and the destination/sink table:
Add a Copy activity and set up its source and sink
Add Additional Columns
Provide a name for the additional column and the value "$$FILEPATH"
Import the mapping and voila - your additional column should appear in the list of source columns, marked "Additional"

Where does Odoo 9 physically store the `image` field of `res.partner` records in the database?

I can't find the image column in the res_partner table of an Odoo 9 PostgreSQL database. Where does Odoo 9 store this image field?
As of Odoo 9, many binary fields have been modified to be stored inside the ir.attachment model (ir_attachment table). This was done in order to benefit from the filesystem storage (and deduplication properties) and avoid bloating the database.
This is enabled on binary fields with the attachment=True parameter, as it is done for res.partner's image fields.
When active, the get() and set() methods of the binary fields store and retrieve the value in the ir.attachment table. If you look at the code, you will see that the attachments use the following values to establish the link to the original record:
name: name of the binary field, e.g. image
res_field: name of the binary field, e.g. image
res_model: model containing the field, e.g. res.partner
res_id: ID of the record the binary field belongs to
type: 'binary'
datas: virtual field with the contents of the binary field, which is actually stored on disk
So if you'd like to retrieve the ir.attachment record holding the value of the image of res.partner with ID 32, you could use the following SQL:
SELECT id, store_fname FROM ir_attachment
WHERE res_model = 'res.partner' AND res_field = 'image' AND res_id = 32;
Because ir_attachment entries use the filesystem storage by default, the actual value of the store_fname field will give you the path to the image file inside your Odoo filestore, in the form 'ab/abcdef0123456789' where the abc... value is the SHA-1 hash of the file. This is how Odoo implements de-duplication of attachments: several attachments with the same file will map to the same unique file on disk.
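For example (the data directory location is installation-specific), a store_fname of 'ab/abcdef0123456789' resolves to a path of the form:

<data_dir>/filestore/<database_name>/ab/abcdef0123456789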
If you'd like to modify the value of the image field programmatically, it is strongly recommended to use the ORM API (e.g. the write() method), to avoid creating inconsistencies or having to manually re-implement the file storage system.
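A minimal sketch of such an update through the ORM, e.g. from an Odoo shell (the partner ID and file path are made up; binary fields expect base64-encoded values):

import base64

# read the new image and encode it the way binary fields expect
with open('/tmp/new_avatar.png', 'rb') as f:
    encoded = base64.b64encode(f.read())

# let the ORM route the value into ir.attachment and the filestore
partner = env['res.partner'].browse(32)
partner.write({'image': encoded})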
References
Here is the original 9.0 commit that introduces the feature for storing binary fields as attachments
And the 9.0 commit that converts the image field of res.partner to use it.