Open DBF Table -> create layer (ESRI ArcObjects)

I have successfully opened a DBF Table:
String dbfDirectoryPath = "Z:/ESRI/data/washingtonCountyDataFiles/tlg_roads";
IWorkspaceFactory workspaceFactory = new ShapefileWorkspaceFactory();
IWorkspace workspace = workspaceFactory.OpenFromFile(dbfDirectoryPath, 0);
IFeatureWorkspace featureWorkspace = workspace as IFeatureWorkspace;
String dbfTable = "tlg_roads_l.dbf";
ITable table = featureWorkspace.OpenTable(dbfTable);
Now I want to map it, which I think entails a call to mapControl.AddLayer(layer). So I need to turn what I have into an ILayer somehow.
It looks like I could just call CreateFeatureClass and then cast the result to an ILayer, but CreateFeatureClass takes six arguments, including a CLSID, so I get the impression I am missing some conceptual points. Thanks for any advice : )
//IFeatureClass featureclass = tableWorkspace.CreateFeatureClass //req six args, incl. CLSID
ILayer layer = featureclass as ILayer;
mapControl.AddLayer(layer);

Apparently DBF files are not meant to be rendered on their own; only shapefiles are, and the DBF just holds the attribute data that goes with the shapefile. That is what I was told, anyway; feel free to enlighten me.
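If that is the case, a minimal sketch of the shapefile route (my assumption: a tlg_roads_l shapefile sits next to the DBF, and mapControl is the ArcObjects map control already mentioned; the wrapping uses the standard FeatureLayer coclass, but treat the exact calls as something to verify):
// Open the shapefile's feature class by its base name (the .shp/.dbf/.shx trio)
IFeatureClass featureClass = featureWorkspace.OpenFeatureClass("tlg_roads_l");
// A feature class is not an ILayer itself; wrap it in a feature layer
IFeatureLayer featureLayer = new FeatureLayerClass();
featureLayer.FeatureClass = featureClass;
featureLayer.Name = featureClass.AliasName;
// IFeatureLayer implements ILayer, so it can go straight onto the map
mapControl.AddLayer(featureLayer);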

QgsField won't accept parameter typeName

I'm trying to create a new vector layer with the same fields as the original layer.
original_layer_fields_list = original_layer.fields().toList()
new_layer = QgsVectorLayer("Point", "new_layer", "memory")
pr = new_layer.dataProvider()
However, when I try:
for fld in original_layer_fields_list:
    type_name = fld.typeName()
    pr.addAttributes([QgsField(name=fld.name(), typeName=type_name)])
new_layer.updateFields()
QgsProject.instance().addMapLayer(new_layer)
I get a layer with no fields in attribute table.
If I try something like:
for fld in original_layer_fields_list:
    if fld.type() == 2:
        pr.addAttributes([QgsField(name=fld.name(), type=QVariant.Int)])
new_layer.updateFields()
QgsProject.instance().addMapLayer(new_layer)
... it works like a charm.
Anyway, I'd rather have the first approach work, so that the process can be automated without checking every field type and then looking up the matching code. Besides, I am not able to find any documentation of the codes for the data types. I managed to find this post https://gis.stackexchange.com/questions/353975/get-only-fields-with-datatype-int-in-pyqgis where, in the comments, Kadir pointed to this source code (https://codebrowser.dev/qt5/qtbase/src/corelib/kernel/qvariant.h.html#QVariant::Type).
I'd really be thankful for any kind of direction.
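One hedged direction, for reference: an untested sketch that generalizes the second (working) approach by reusing each source field's QVariant type code instead of hard-coding it, assuming QgsField's first three positional arguments are name, type, and typeName:
# assumes original_layer is the layer from the question, in the QGIS Python console
original_layer_fields_list = original_layer.fields().toList()
new_layer = QgsVectorLayer("Point", "new_layer", "memory")
pr = new_layer.dataProvider()
for fld in original_layer_fields_list:
    # pass the source field's QVariant type code (and type name) straight through
    pr.addAttributes([QgsField(fld.name(), fld.type(), fld.typeName())])
new_layer.updateFields()
QgsProject.instance().addMapLayer(new_layer)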

GetSet custom storage class (Simulink)

I have a model which takes two inputs, multiplies them, and gives the output:
output_1 = input_1 * input_2
I have declared my Simulink signals with CustomStorageClass = 'GetSet':
input_1 = Simulink.Signal;
input_1.CoderInfo.StorageClass = 'Custom';
input_1.CoderInfo.CustomStorageClass = 'GetSet';
input_1.CoderInfo.CustomAttributes.GetFunction = 'Get_input_1';
input_1.CoderInfo.CustomAttributes.SetFunction = 'Set_input_1';
input_1.CoderInfo.CustomAttributes.HeaderFile = 'signals.h';
input_2 = Simulink.Signal;
input_2.CoderInfo.StorageClass = 'Custom';
input_2.CoderInfo.CustomStorageClass = 'GetSet';
input_2.CoderInfo.CustomAttributes.GetFunction = 'Get_input_2';
input_2.CoderInfo.CustomAttributes.SetFunction = 'Set_input_2';
input_2.CoderInfo.CustomAttributes.HeaderFile = 'signals.h';
output_1 = Simulink.Signal;
output_1.CoderInfo.StorageClass = 'Custom';
output_1.CoderInfo.CustomStorageClass = 'GetSet';
output_1.CoderInfo.CustomAttributes.GetFunction = 'Get_output_1';
output_1.CoderInfo.CustomAttributes.SetFunction = 'Set_output_1';
output_1.CoderInfo.CustomAttributes.HeaderFile = 'signals.h';
Now I am trying to generate code from the model using Simulink Coder.
In the model's code generation settings I have selected ert.tlc as the system target file.
But the generated code does not have a Get_input_1() or Get_input_2() call like the ones shown in this link:
http://www.mathworks.com/help/ecoder/ug/getset-custom-storage-classes.html
What have I missed in the settings? Please suggest.
I know you probably already solved this issue, but I have also seen this behavior before.
Sometimes MATLAB does not update the header files correctly. If you had set a different configuration for your variable and then made a change involving the header files, I would recommend deleting the *_ert_rtw and slprj folders (they will appear again); see the sketch below. It is similar to doing a "make clean" operation, to ensure that everything is brand new.
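A minimal MATLAB sketch of that cleanup, assuming the folders are in the current working directory and the model is called my_model (a placeholder; substitute your model's name):
% remove the ERT code generation folder and the simulation/codegen cache;
% both are regenerated on the next build
if exist('my_model_ert_rtw', 'dir')
    rmdir('my_model_ert_rtw', 's');
end
if exist('slprj', 'dir')
    rmdir('slprj', 's');
end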

How to specify strings in Weka file?

I am working on a text classification system and I would like to use unigrams as features. When building the ARFF file, I declared a string attribute field in which I want to put all the words contained in a message, separated by commas. However, Weka tells me that it "Cannot handle string attributes". I tried defining the relation in the header with StringToWordVector, but it didn't help. How should I go about this? Many thanks!
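(For reference, a minimal ARFF header sketch for this kind of data, with made-up relation, attribute, and class names; the whole message text goes into a single string attribute, which StringToWordVector can later split into word features.)
@relation messages

@attribute text string
@attribute class {spam, ham}

@data
'this is the first message', spam
'another message here', ham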
If your ARFF file format is correct, then the following code can help you:
// dataSource: path to your ARFF file
BufferedReader trainReader = new BufferedReader(new FileReader(dataSource));
Instances trainInsts = new Instances(trainReader);
trainInsts.setClassIndex(trainInsts.numAttributes() - 1);
// the filter converts the string attribute into numeric word-vector attributes
StringToWordVector STWfilter = new StringToWordVector();
FilteredClassifier model = new FilteredClassifier();
model.setFilter(STWfilter);
STWfilter.setInputFormat(trainInsts);
// the converted data
trainInsts = Filter.useFilter(trainInsts, STWfilter);
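From there, one hedged way to continue (a sketch assuming weka.classifiers.bayes.NaiveBayes as the learner; any Weka classifier would do, and at this point trainInsts already holds the numeric word-vector attributes):
// train a classifier on the converted instances
NaiveBayes nb = new NaiveBayes();
nb.buildClassifier(trainInsts);
// quick sanity check: classify the first training instance
double predicted = nb.classifyInstance(trainInsts.instance(0));
System.out.println(trainInsts.classAttribute().value((int) predicted));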

Getting line locations with iText

How can one find where lines are located in a document with iText?
Say I have a table in a PDF document and want to read its contents; I would like to find where exactly the cells are located. To do that, I thought I might find the intersections of the lines.
I think your only option using iText will be to parse the PDF tokens manually. Before doing that I would have a copy of the PDF spec handy.
(I'm a .Net guy so I use iTextSharp but other than some capitalization differences and property declarations they're almost 100% the same.)
You can get the individual tokens using the PRTokeniser object which you feed bytes into from calling getPageContent(pageNum) on your PdfReader.
//Get bytes for page 1
byte[] pageBytes = reader.getPageContent(1);
//Get the tokens for page 1
PRTokeniser tokeniser = new PRTokeniser(pageBytes);
Then just loop through the PRTokeniser:
PRTokeniser.TokType tokenType;
string tokenValue;
while (tokeniser.nextToken()) {
    tokenType = tokeniser.tokenType;
    tokenValue = tokeniser.stringValue;
    //...check tokenValue, do something with it
}
As far as tokenValue goes, you'd probably want to look for re and l values, for rectangles and lines. If you see an re, look at the previous 4 values; if you see an l, look at the previous 2 values. This also means that you need to store each tokenValue in an array so you can look back later.
Depending on what you used to create the PDF, you might get some interesting results. For instance, I created a 4-cell table with Microsoft Word and saved it as a PDF. For some reason there are two sets of 10 rectangles with many duplicates, but the general idea still works.
Below is C# code targeting iTextSharp 5.1.1.0. You should be able to convert it to Java and iText very easily; I noted the one line with .Net-specific code that needs to be adjusted from a generic list (List<string>) to a Java equivalent, probably an ArrayList. You'll also need to adjust some casing: .Net uses Object.Method() whereas Java uses Object.method(). Lastly, .Net accesses properties without gets and sets, so Object.Property is both the getter and the setter, compared to Java's Object.getProperty and Object.setProperty.
Hopefully this gets you started at least!
//Source file to read from
string sourceFile = "c:\\Hello.pdf";
//Bind a reader to our PDF
PdfReader reader = new PdfReader(sourceFile);
//Create our buffer for previous token values. For Java users, List<string> is a generic list, probably most similar to an ArrayList
List<string> buf = new List<string>();
//Get the raw bytes for the page
byte[] pageBytes = reader.GetPageContent(1);
//Get the raw tokens from the bytes
PRTokeniser tokeniser = new PRTokeniser(pageBytes);
//Create some variables to set later
PRTokeniser.TokType tokenType;
string tokenValue;
//Loop through each token
while (tokeniser.NextToken()) {
    //Get the type and value
    tokenType = tokeniser.TokenType;
    tokenValue = tokeniser.StringValue;
    //If the type is a numeric type
    if (tokenType == PRTokeniser.TokType.NUMBER) {
        //Store it in our buffer for later use
        buf.Add(tokenValue);
    //Otherwise we only care about raw commands which are categorized as "OTHER"
    } else if (tokenType == PRTokeniser.TokType.OTHER) {
        //Look for a rectangle token
        if (tokenValue == "re") {
            //Sanity check, make sure we have enough items in the buffer
            if (buf.Count < 4) throw new Exception("Not enough elements in buffer for a rectangle");
            //Read and convert the values
            float x = float.Parse(buf[buf.Count - 4]);
            float y = float.Parse(buf[buf.Count - 3]);
            float w = float.Parse(buf[buf.Count - 2]);
            float h = float.Parse(buf[buf.Count - 1]);
            //..do something with them here
        }
    }
}
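The sample only handles re. Following the same "previous two values" rule described earlier, here is a hedged, untested sketch of the extra branch for line segments; it would slot into the else-if chain inside the OTHER case above:
//Look for a line-to token
} else if (tokenValue == "l") {
    //Sanity check, make sure we have enough items in the buffer
    if (buf.Count < 2) throw new Exception("Not enough elements in buffer for a line");
    //The previous two numbers are the segment's end point;
    //its start point is wherever the preceding m (move-to) or l left off
    float lx = float.Parse(buf[buf.Count - 2]);
    float ly = float.Parse(buf[buf.Count - 1]);
    //..do something with the point here
}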

Custom clipboard data format across RDC (.NET)

I'm trying to copy a custom object from an RDC window to the host (my local) machine. It fails.
Here's the code I'm using to 1) copy and 2) paste:
1) Remote (client running on Windows XP accessed via RDC):
//copy entry
IDataObject ido = new DataObject();
XmlSerializer x = new XmlSerializer(typeof(EntryForClipboard));
StringWriter sw = new StringWriter();
x.Serialize(sw, new EntryForClipboard(entry));
ido.SetData(typeof(EntryForClipboard).FullName, sw.ToString());
Clipboard.SetDataObject(ido, true);
2) Local (client running on local Windows XP x64 workstation):
//paste entry
IDataObject ido = Clipboard.GetDataObject();
DataFormats.Format cdf = DataFormats.GetFormat(typeof(EntryForClipboard).FullName);
if (ido.GetDataPresent(cdf.Name)) //<- this always returns false
{
    //can never get here!
    XmlSerializer x = new XmlSerializer(typeof(EntryForClipboard));
    string xml = (string)ido.GetData(cdf.Name);
    StringReader sr = new StringReader(xml);
    EntryForClipboard data = (EntryForClipboard)x.Deserialize(sr);
}
It works perfectly on the same machine though.
Any hints?
There are a couple of things you could look into:
Are you sure the serialization of the object truly converts it into XML? Perhaps the output XML has references to your memory space. Try looking at the text of the XML to see.
If you really do have a serialized XML version of the object, why not store the value as plain-vanilla text instead of using typeof(EntryForClipboard).FullName as the format name? Something like:
XmlSerializer x = new XmlSerializer(typeof(EntryForClipboard));
StringWriter sw = new StringWriter();
x.Serialize(sw, new EntryForClipboard(entry));
Clipboard.SetText(sw.ToString(), TextDataFormat.UnicodeText);
And then, all you'd have to do in the client program is check whether the text in the clipboard can be deserialized back into your object.
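For example, a minimal sketch of that paste side (assuming the same EntryForClipboard type is available locally; a failed deserialization simply means the clipboard held unrelated text):
//paste entry from plain text
string xml = Clipboard.GetText(TextDataFormat.UnicodeText);
if (!string.IsNullOrEmpty(xml))
{
    try
    {
        XmlSerializer x = new XmlSerializer(typeof(EntryForClipboard));
        using (StringReader sr = new StringReader(xml))
        {
            EntryForClipboard data = (EntryForClipboard)x.Deserialize(sr);
            //...use data here
        }
    }
    catch (InvalidOperationException)
    {
        //the clipboard text was not a serialized EntryForClipboard
    }
}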
OK, I found what the issue was.
Custom clipboard format names get truncated to 16 characters when they are copied across an RDC session.
In the line
ido.SetData(typeof(EntryForClipboard).FullName, sw.ToString());
the format name was quite long.
When I received the copied data on the host machine, the available formats did include my custom format, but its name was truncated to 16 characters.
IDataObject ido = Clipboard.GetDataObject();
ido.GetFormats(); //used to see available formats.
So I just used a shorter format name:
//to copy
ido.SetData("MyFormat", sw.ToString());
...
//to paste
DataFormats.Format cdf = DataFormats.GetFormat("MyFormat");
if (ido.GetDataPresent(cdf.Name)) {
    //this now works
    ...
}