I have converted a .tiff file into ASCII format with the help of ArcGIS. Now I want to open that same file in WEKA, but it asks for a file in .arff format, and I am clueless about how to convert the ASCII file into that, since the ASCII file is a .txt file.
It's difficult to see the issue without some sample data or an error message, but it appears that the file can't be read into Weka in its current state.
You could try formatting the dataset to comply with the Attribute-Relation File Format (ARFF).
Failing that, you could also format the dataset as a comma-separated values (CSV) file, with header information on the first row and the data underneath. Weka accepts CSV files just fine.
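If you do go the ARFF route, note that ARFF is just plain text with a small header, so a short script can produce it. Here is a minimal Python sketch, assuming a whitespace-separated ASCII .txt file of numeric values; the file names and attribute names are placeholders, and it does not handle any ArcGIS-specific header lines:
input_path = 'your_ascii_file.txt'    # hypothetical input file
output_path = 'your_ascii_file.arff'  # hypothetical output file
with open(input_path) as src:
    rows = [line.split() for line in src if line.strip()]
with open(output_path, 'w') as dst:
    dst.write('@relation converted_raster\n\n')
    for i in range(len(rows[0])):
        dst.write('@attribute band_%d numeric\n' % (i + 1))
    dst.write('\n@data\n')
    for row in rows:
        dst.write(','.join(row) + '\n')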
Hope this helps!
Considering that you are working with satellite imagery and that you know R, you could try something like this:
library(raster)
library(foreign)
library(RWeka)
dir.satellite <- '../tiffs'  # Folder with your satellite TIF files
# Read them from their full paths
bands <- list.files(file.path(dir.satellite), full.names = TRUE,
                    pattern = '\\.TIF$')
stkTIF <- raster::stack(bands)  # Group them into a RasterStack object
# Write the WEKA .arff file
write.arff(as.matrix(stkTIF),
           file = file.path(dir.satellite, 'your_file_name.arff'))
I have a group of .dat files that I need to convert to .txt files. I have a directory called "data" containing the files 0.dat, 1.dat, ..., 210.dat, and I want to convert these .dat files to .txt files (0.txt, 1.txt, ..., 210.txt). The data type is 16-bit integer.
Ideally you should try a few things yourself before coming here asking for a solution. Since this is your first post, I'll give you a few pointers. Next time, please Google a few solutions, try them, and post your code/error if you still have issues, to receive help.
First, list all the .dat files in the directory using:
files = dir('*.dat');
Add a for loop to read each .dat file one by one:
fileID = fopen('XXXX.dat');          % replace XXXX with the current file name
rawData = fread(fileID, '*uint16');  % use '*int16' instead if the values are signed
fclose(fileID);
Then, once the data is loaded, you can write it out as a .txt file (for example with fprintf or dlmwrite).
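If Python is also an option, here is a minimal sketch of the same conversion using NumPy; the folder name and the signed/unsigned choice are assumptions, so adjust them to your data:
import glob
import os
import numpy as np

for dat_path in glob.glob(os.path.join('data', '*.dat')):
    values = np.fromfile(dat_path, dtype=np.int16)   # use np.uint16 if the data is unsigned
    txt_path = os.path.splitext(dat_path)[0] + '.txt'
    np.savetxt(txt_path, values, fmt='%d')           # one value per line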
I want to open a .dat file from INCA in CANoe, but for that I need to convert it into a logging file format (ASCII, BLF, MDF4, ...). Does anybody know how to do this? I can't find anything on the internet.
There should be an option for this in the ETAS menu bar.
Check under Utilities > Measure Data Converter.
You can select the file to be converted and also the format you need to convert it to.
How can I compare two minified JSON files in Beyond Compare? Is there a built-in file format for JSON? I'm looking to compare pretty-printed representations of the underlying JSON objects.
In this thread a representative says:
While not in the box yet, we do have a JSON sorted format available for download in our Additional File Formats section:
With a link to Scooter Software Downloads
You can achieve this specialized diff functionality by defining a new file format conversion rule in Beyond Compare. This example was done on Windows.
Step 0: Create a Python conversion script to render the formatted JSON. Save the following script somewhere on your hard drive:
import json
import sys

sourceFile = sys.argv[1]
targetFile = sys.argv[2]

# Load json data
with open(sourceFile, 'r') as file_r:
    data = json.load(file_r)

# Write formatted json data
with open(targetFile, 'w') as file_w:
    json.dump(data, file_w, indent=4)
Step 1: Navigate in the BeyondCompare menu to: Tools-->File Formats...
Step 2: Create a new file format entry by clicking the + button and selecting Text Format.
Step 3: Enter *.json into the file format's Mask field, and any description that will help you recall the file format's purpose.
Step 4: Define the file format's conversion settings. Select the Conversion tab and choose External program (Unicode filenames) from the pull-down menu.
In the Loading field, enter the following shell command:
python C:\Source\jsonPrettyPrint.py "%s" "%t"
Step 5: Press the Save button, and optionally rename the file format by right-clicking it in the File Formats Name and Mask table.
Further customization of the JSON dumping can be done by looking at the Python documentation, e.g. sort_keys=True.
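For example, to also sort the keys (useful when the two files list their keys in different orders), the json.dump call in the script above could become:
json.dump(data, file_w, indent=4, sort_keys=True)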
I need to convert my SFF file to PDF and then verify the document, i.e. compare the SFF file and the converted file.
For that, I am thinking of converting the SFF file to an image file and the PDF file to an image file,
then comparing both files using image processing.
To do this, I'm searching for a program to convert SFF to BMP.
Does anyone know of such a program, or have another idea how to do the job?
Thank you in advance...
Looks like you need reaConverter. It appears to be a mature tool you can rely on. There is an online version of the tool here.
I think:
https://github.com/Sonderstorch/sfftools
will do what you need (convert SFF -> TIFF/JPEG/...), and then you can use ImageMagick (for example) to go to PDF.
SFF is clearly not a widely used image format today; however, if you have legacy .sff Structured Fax Format files, they are similar (though not exactly identical) to monochrome G4 format.
By far the simplest programmable method to convert them is IrfanView, which can read, modify and resave them as other formats in batches.
Output can be any other modern image type, including mono BMP, G4 fax, or PDF (with or without Ghostscript).
I am trying to read data from text files (the output given by Tesseract OCR) and save the same data in an Excel file. The problem I am facing is that the text files are in a space-separated format, and there are multiple files. I need to read all the files and save them in an Excel sheet.
I am using MATLAB to import and export the data. I even thought of using Python to convert the files into CSV format so that I could easily import them into MATLAB and simply write them to Excel, but I have found no good solution.
Any guidance would be of great help.
Thank you.
To read a text file in MATLAB you can use fscanf or textscan; then, to export to Excel, you can use xlswrite, which writes directly to the Excel file.
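Since you mentioned possibly going through Python, here is a minimal sketch of that route using pandas; the folder name, the output file name, and the assumption that the text files have no header row are placeholders to adjust, and writing .xlsx requires an engine such as openpyxl:
import glob
import os
import pandas as pd

input_files = sorted(glob.glob(os.path.join('ocr_output', '*.txt')))  # assumed folder

with pd.ExcelWriter('ocr_results.xlsx') as writer:  # needs openpyxl installed
    for path in input_files:
        # Space-separated values, no header row assumed
        df = pd.read_csv(path, sep=r'\s+', header=None)
        sheet = os.path.splitext(os.path.basename(path))[0][:31]  # Excel's 31-char sheet-name limit
        df.to_excel(writer, sheet_name=sheet, index=False, header=False)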