In DSpace 6.3, when I export metadata for a particular collection, a .csv file is downloaded. How can I add a custom field or column to that file, for example a fulltext PDF column or a name-of-the-file column? Where can I modify the code or configuration to get this column?
I have attached the .csv file, so please guide me to a solution for adding a custom column to the metadata.
When I export the metadata, I should then get all the fields, including the added column.
Thanks & Regards
Sai Kumar S
[screenshot of the exported .csv][1]
[1]: https://i.stack.imgur.com/KLlne.png
The overall aim of the pipeline is to copy from XML to Oracle.
One of the source columns is a datetime that needs formatting, so I'm using an intermediate copy activity to copy from XML to CSV, as instructed in this answer.
From the CSV to the table it is simple mapping, except for the need for an additional target column with a fixed value of '365Response'.
I've tried adding this as an additional column, as shown below:
However, on the mapping tab, I'm not able to select the new additional column:
What did I do wrong?
Your process for adding an additional column in the Copy activity looks correct. If the additional column is not showing in the mapping, you can clear the mapping and import the schema again to refresh it.
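If you are working in pipeline JSON rather than the UI, a minimal sketch of what the source side of the copy activity might look like (the activity name, source/sink types, and column name here are placeholders, not anything from your pipeline):

```json
{
    "name": "CopyCsvToOracle",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "DelimitedTextSource",
            "additionalColumns": [
                {
                    "name": "SourceSystem",
                    "value": "365Response"
                }
            ]
        },
        "sink": { "type": "OracleSink" }
    }
}
```

After re-importing the schema, the additional column should appear as a selectable source column on the Mapping tab.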
I am trying to do schema-compliance validation of an input file in ADF. I have tried the below:
Get Metadata activity
The schema validation that is available in the source activity
But the above seems to only check whether a particular field is present at the specified position. Also, Azure by default takes the datatype of all these fields as string, since the input is a flat file.
I want to check the position and datatype as well. For example:

```
empid,name,salary
1,abc,10
2,def,20
3,ghi,50
xyz,jkl,10
```

The row with empid xyz needs to be rejected, as it is not of number datatype. Any help is appreciated.
You can use a data flow and create a filter to achieve this.
Below is my test:
1. Create a source.
2. Create a filter and use this expression: `regexMatch(empid,'(\\d+)')`
3. Output: the row with empid xyz is filtered out.
Hope this can help you.
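As a side note, `(\\d+)` matches any value that merely contains a digit (e.g. abc1 would pass); if empid must be entirely numeric, `^\\d+$` is stricter. In data flow script terms, the source-plus-filter above amounts to roughly this sketch (all columns read as strings, as with any flat-file source):

```
source(output(
        empid as string,
        name as string,
        salary as string
    ),
    allowSchemaDrift: true,
    validateSchema: false) ~> source1
source1 filter(regexMatch(empid, '^(\\d+)$')) ~> Filter1
```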
I am copying data from a REST API to an Azure SQL database. The copy is working fine, but there is a column which isn't being returned by the API.
What I want to do is add this column to the source. I've got a variable called symbol which I want to use as the source column. However, this isn't working:
(screenshot: Mapping)
Any ideas?
This functionality is available using the "Additional Columns" feature of the Copy Activity.
If you navigate to the "Source" area, the bottom of the page will show you an area where you can add Additional Columns. Clicking the "New" button will let you enter a name and a value (which can be dynamic), which will be added to the output.
Source(s):
https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#add-additional-columns-during-copy
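As a sketch only (the `additionalColumns` shape follows the linked doc; the source type and column name are assumptions for your case), an additional column fed from your symbol variable could look like:

```json
"source": {
    "type": "RestSource",
    "additionalColumns": [
        {
            "name": "symbol",
            "value": {
                "value": "@variables('symbol')",
                "type": "Expression"
            }
        }
    ]
}
```

The new column can then be mapped to the target table like any ordinary source column.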
To my knowledge, the Copy activity can't meet your requirements. Please see the error conditions in the link:

> - Source data store query result does not have a column name that is specified in the input dataset "structure" section.
> - Sink data store (if with pre-defined schema) does not have a column name that is specified in the output dataset "structure" section.
> - Either fewer columns or more columns in the "structure" of sink dataset than specified in the mapping.
> - Duplicate mapping.
I think Mapping Data Flow is your choice. You could add a derived column before the sink and create a data flow parameter named Symbol.
Then set the derived column to the value of the Symbol parameter, as sketched below.
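In data flow script terms, the derived-column step is roughly this (assuming a string parameter named Symbol and a source stream called source1):

```
source1 derive(symbol = $Symbol) ~> DerivedColumn1
```

When the pipeline invokes the data flow, pass the variable's value into the Symbol parameter.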
You can use the Copy Activity with a stored proc sink to do that. See my answer here for more info.
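For context, a stored-proc sink hands the copied rows to the procedure through a table-valued parameter, so the missing column can be supplied as an ordinary procedure parameter. A hypothetical sketch (every name here is invented):

```sql
-- Table type matching the columns the copy activity delivers.
CREATE TYPE dbo.StockType AS TABLE (price DECIMAL(18,4), quotedAt DATETIME2);
GO
-- The sink calls this proc; @symbol is configured as an extra
-- stored-procedure parameter on the copy activity's sink.
CREATE PROCEDURE dbo.spInsertStock
    @stock dbo.StockType READONLY,
    @symbol NVARCHAR(10)
AS
BEGIN
    INSERT INTO dbo.Stocks (symbol, price, quotedAt)
    SELECT @symbol, price, quotedAt
    FROM @stock;
END;
```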
I've downloaded multiple metro extracts from OpenStreetMap as PBF files. When I try to import them with osm2pgsql, it works for the first one and creates the tables. I then want to add a city_id column to planet_osm_ways so I know which way ID belonged to which city. But when I then try to import another city, it fails with ERROR: Missing data for column "city_id". Is there a way to modify the planet_osm_ways table without breaking the script? I really need to know which ID belonged to which metro extract.
You need to edit the style file (default.style, possibly in the osm2pgsql-bin directory) used by osm2pgsql.
You can then add this instruction:

```
# Add custom column
node,way    city_id    int4    linear
```

The column will be created and, provided no tag has this name, will not be populated. You are then free to populate it as you want, as in the sketch below.
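For example (the IDs here are hypothetical, and this assumes each extract is loaded in a separate osm2pgsql run), the rows just added by an import are the ones whose city_id is still NULL:

```sql
-- Tag the first city's ways after the initial import.
UPDATE planet_osm_ways SET city_id = 1 WHERE city_id IS NULL;
-- After importing the next extract, tag the newly added rows.
UPDATE planet_osm_ways SET city_id = 2 WHERE city_id IS NULL;
```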
I am trying to create a Crystal Report, but I want to use the connection string that I have mentioned in the Web.config file. Also, I want to create an XSD file for my Crystal Report file to refer to. Can someone please direct me to a tutorial or forum to solve my problem?
Thank you all for helping me out.
I am now able to display data in the Crystal Report file using the XSD file.
The XSD file has fields with exactly the same names as those mentioned in the SELECT query that brings values from the DB.
The following is the part of the XSD file that contains the field names in the xs:element tags.
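(The original fragment was attached as an image. Purely as a sketch, with hypothetical field names, an XSD of the shape described here looks like the following; the real file must use the exact column names and datatypes returned by the query.)

```xml
<?xml version="1.0" encoding="utf-8"?>
<xs:schema id="Summary_Report_on_portal"
           xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Summary_Report_on_portal">
    <xs:complexType>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element name="Summary_Updt">
          <xs:complexType>
            <xs:sequence>
              <!-- Hypothetical fields: replace with the exact names and
                   datatypes from the SELECT query. -->
              <xs:element name="ShipmentId" type="xs:int" minOccurs="0" />
              <xs:element name="ShipmentDate" type="xs:dateTime" minOccurs="0" />
              <xs:element name="Status" type="xs:string" minOccurs="0" />
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:choice>
    </xs:complexType>
  </xs:element>
</xs:schema>
```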
The datatypes of these fields also need to be mentioned. The XSD file name can be kept as desired.
The names "Summary_Report_on_portal" and "Summary_Updt" do not correspond to any dataset name in the code-behind or to any query fields; they can be chosen as the user wishes.
This XSD file needs to be referred to in the Crystal Report, using the Database Expert in the Field Explorer window. The "Summary_Updt" name is visible among the new connections, which can be added to the Crystal Report.
The fields mentioned in the xs:element tags are then visible for the user to drag and drop into the Crystal Report.
When the user assigns the data source to the Crystal Report (a dataset), the fields in the dataset are matched against the XSD field values.
CODE:

```csharp
// Push the first table of the filled DataSet into the report.
objBL.Rpt.SetDataSource(objBL.ds_shipment_info.Tables[0]);
```
Hope this is detailed enough. Let me know if anyone wants more info
You can also programmatically set the report data source if you need to; I can provide details if you require.
Between calling myReportDocument.Load("myreport.rpt") and myReportDocument.Refresh() (the latter of which actually gets the data from the database), it is possible to add a call to myReportDocument.SetDataSource(myDataSource). This takes an object of a data-source type, which you can create via its constructors with the URL of the data source you wish to use, its username, and its password.
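A minimal C# sketch of that sequence (the report path and the data-filling helper are placeholders; this is the push model, where the data is supplied rather than pulled by Refresh()):

```csharp
using System.Data;
using CrystalDecisions.CrystalReports.Engine;

var report = new ReportDocument();
report.Load("myreport.rpt");

// Hypothetical helper that returns a DataSet matching the report's XSD.
DataSet ds = LoadShipmentData();
report.SetDataSource(ds.Tables[0]);
```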
Hope this helps
A different way would be to call ReportDocument.SetDatabaseLogon(string user, string password, string server, string database) before Refresh(), if you don't want to reuse a connection. This has the benefit of being simple, but means that you don't reuse data sources. A rough C# example follows.
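(All credentials and names below are placeholders.)

```csharp
using CrystalDecisions.CrystalReports.Engine;

var report = new ReportDocument();
report.Load("myreport.rpt");

// Point the report at the database without reusing an existing connection.
report.SetDatabaseLogon("user", "password", "serverName", "databaseName");
report.Refresh();   // pulls the data using the logon set above
```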