ask_question MC16_Phase2 : 3156 occurences (100.00%) : module abc_testbench/abc_top_0/abc**
This statement is in a file. There are multiple entries of this statement and other stuff is also present. I need to read it from there and put it in another file in the following manner:
3156 abc_testbench/abc_top_0/abc**
Fixed entities in that statement are:
ask_question
occurences
module
Could you please elaborate on the statement? I am new to Perl. Could you please walk me through the whole scenario from the very beginning, starting from reading the file through grabbing the pieces in the given manner. Thanks Ray Toal.
You will want a regex with two capturing groups. Based on the information given, the regex would be:
/ask_question[^:]*:\s*(\d+)\s*occurences[^:]*:\s*module\s*([^*]*\*\*)/
Apply this regex throughout the input, and write the captures, separated by a space, to your output file.
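If it helps to see the whole flow end to end, here is a sketch of the same idea in Python (the file names in the commented usage are placeholders; the pattern keeps the file's own spelling "occurences"):

```python
import re

# Mirrors the regex above: capture the count and the module path.
PATTERN = re.compile(
    r'ask_question[^:]*:\s*(\d+)\s*occurences[^:]*:\s*module\s*([^*]*\*\*)'
)

def extract(lines):
    """Return 'count path' strings for every matching input line."""
    out = []
    for line in lines:
        m = PATTERN.search(line)
        if m:
            out.append(m.group(1) + ' ' + m.group(2))
    return out

# File usage (paths are placeholders):
# with open('input.txt') as fin, open('output.txt', 'w') as fout:
#     fout.write('\n'.join(extract(fin)) + '\n')
```

Non-matching lines ("other stuff") simply fall through the `if` and are skipped.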
Related
I have been working on a product code to resolve an issue but am stuck on a line of code
Can anyone help me understand what exactly does this command do?
perl -MText::CSV -lne 'BEGIN{$p = Text::CSV->new()} print join "|", $p->fields() if $p->parse($_)' /home/daily/${FULL_FILENAME} > /home/output.txt
I think it's meant to copy the file to my home location with some transformations, but I'm not sure exactly what it does.
This is a slightly broken program that translates a comma-separated values (CSV) file to a pipe-separated values file.
The particular command-line switches are documented in perlrun. This is a "one-liner", so you can read about those to see what's going on there.
The Text::CSV module deals with CSV files, and the program is parsing a line from the file and re-outputting as a pipe-separated file.
But this program treats each line as a complete record. That might be fine for you, but at some point you might end up with a literal value that has a newline in it, like a,"b\nc",d. Now reading line-by-line breaks the program, since the quotes appear to be unclosed within the first line. Not only that, it blindly joins the parsed fields without considering whether any of the fields should be quoted. It might be unlikely that a pipe character would appear in the data, but the problem isn't its rarity; it's the consequences and cost when it does show up.
The rewrite.pl example script in the related module Text::CSV_XS is a tool that could replace this one-liner. It properly reads the input and knows how to properly translate it.
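For comparison, here is a small Python sketch of a CSV-to-pipe translation that avoids both problems: the reader handles quoted fields (including embedded commas and newlines), and the writer re-quotes any field that needs it.

```python
import csv
import io

def csv_to_pipe(csv_text):
    """Translate CSV text to pipe-separated text, preserving quoting."""
    # csv.reader copes with quoted fields, embedded commas, and embedded
    # newlines; csv.writer quotes any field containing the | delimiter.
    out = io.StringIO()
    writer = csv.writer(out, delimiter='|')
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow(row)
    return out.getvalue()
```

The same structure works on real files by passing the open file handles instead of StringIO buffers.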
I'm a relatively new SAS user, so please bear with me!
I have 63 folders that each contain a uniquely named xls file, all containing the same variables in the same order. I need to concatenate them into a single file. I would post some of the code I've tried but, trust me, it's all gone horribly awry and is totally useless. Below is the basic library structure in a libname statement, though:
`libname JC 'W:\JCs\JC Analyses 2016-2017\JC Data 2016-2017\2 - Received from JCs\&jcname.\2016_&jcname..xls';`
(there are 63 unique &jcname values)
Any ideas?
Thanks in advance!!!
This is a common requirement, but it requires a fairly uncommon knowledge of multiple SAS functions to execute well.
I like to approach this problem with a two step solution:
Get a list of filenames
Process each filename in a loop
While you can process each filename as you read it, it's a lot easier to debug and maintain code that separates these steps.
Step 1: Read filenames
I think the best way to get a list of filenames is to use dread() to read directory entries into a dataset as follows:
filename myfiles 'c:\myfolder';
data filenames (keep=filename);
  dir = dopen('myfiles');
  do file = 1 to dnum(dir);
    filename = dread(dir,file);
    output;
  end;
  rc = dclose(dir);
run;
After this step you can verify that the correct filenames have been read by printing the dataset. You could also modify the code to only output certain types of files. I leave this as an exercise for the reader.
Step 2: use the files
Given a list of names in a dataset, I prefer to use call execute() inside a data step to process each file.
data _null_;
  set filenames;
  call execute('%import('||filename||')');
run;
I haven't included a macro to read in the Excel files and concatenate the dataset (partly because I don't have a suitable list of Excel files to test, but also because it's a situational problem). The stub macro below just outputs the filenames to the log, to verify that it's running:
%macro import(filename);
/* This is a dummy macro. Here is where you would do something with the file */
%put &filename;
%mend;
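For readers outside SAS, the same two-step pattern (collect the filenames, then hand each one to a per-file handler) can be sketched in Python; the folder path, suffix, and handler here are placeholders, not part of the original answer:

```python
import os

def list_files(folder, suffix='.xls'):
    # Step 1: collect matching filenames (the dopen/dread/dclose analogue).
    return sorted(f for f in os.listdir(folder) if f.endswith(suffix))

def process_all(folder, import_one):
    # Step 2: call a handler for each file (the call execute analogue).
    for name in list_files(folder):
        import_one(os.path.join(folder, name))
```

Keeping the listing and the processing separate makes each step easy to test on its own, which is the point of the two-step approach.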
Notes:
Arguably, there are many examples of how to do this in multiple places on the web, e.g.:
this SAS knowledge base article (http://support.sas.com/kb/41/880.html)
or this paper from SUGI,
However, most of them rely on the use of pipe to run a dir or ls command, which I feel is the wrong approach because it's platform dependent and in many modern environments the ability to pipe shell commands will be disabled.
I based this on an answer by Daniel Santos in communities.sas.com, but, given the superior functionality of stackoverflow I'd much rather see a good answer here.
I'm having trouble loading a .txt file in Matlab. The main problem is that the rows are not of equal length. I'll attach the file so you can see more clearly what I'm trying to say. First, the file has information about each node in a graph. One row holds information like this:
1|1|EL_1_BaDfG|4,41|5,1|6,99|8,76|9,27|13,88|14,19|15,91|19,4|21,48...
it means:
id|type|name|connected_to, weight|connected_to, weight| and so on..
I was trying to use the fscanf function, but it only reads the whole line as one string. How am I supposed to divide it into a struct with the information that I need?
Best regards,
Dejan
Here, you can see file that I'm trying to load
An alternative to Stewie's answer is to use:
fgetl to read each line
Then use
strread (or textscan) to split the string
First split on the | delimiter; then, on the sub-section(s) containing a comma, split a second time.
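The split-twice idea is language-independent; here is a Python sketch of it, using the record format from the question. The field types (integer ids, float weights) are my guess from the sample line, not something the question confirms:

```python
def parse_node(line):
    """Parse one 'id|type|name|node,weight|node,weight|...' record."""
    parts = line.strip().split('|')          # first split: | delimiter
    node = {'id': int(parts[0]),
            'type': int(parts[1]),
            'name': parts[2],
            'edges': []}
    for pair in parts[3:]:                   # second split: , within each pair
        target, weight = pair.split(',')
        node['edges'].append((int(target), float(weight)))
    return node
```

In Matlab the same structure falls out of strsplit/textscan on '|' followed by a second pass on ','.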
I want to diff two files whose lines begin with "line_$NR". I want the diff to ignore the "line_$NR" prefixes, but when the differences are printed I want the line_$NR prefixes to be displayed.
Is it possible to do that?
Thanks.
I believe in this case you have to preprocess your input files to remove /^line_[0-9]*/, diff the resulting files, then recombine the diff output with the removed words according to the line numbers in the diff output.
Python's difflib should be very handy here, or same from perl. If you want to stick to shell, I suppose you could get by with awk.
If you don't need exact output, perhaps you can use diff's --line-format=... directive to inject actual line number in a diff, rather than the word you removed in preprocessing step.
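To make the preprocess-compare-recombine idea concrete, here is a minimal Python sketch with difflib: it compares the lines with the prefixes stripped but reports the original, prefixed lines. The output format (-/+ markers) is my own simplification, not real diff output:

```python
import re
import difflib

PREFIX = re.compile(r'^line_\d+\s*')

def diff_ignoring_prefix(a_lines, b_lines):
    # Compare stripped lines, but print the originals with their prefixes.
    a_key = [PREFIX.sub('', l) for l in a_lines]
    b_key = [PREFIX.sub('', l) for l in b_lines]
    out = []
    sm = difflib.SequenceMatcher(None, a_key, b_key)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != 'equal':
            out += ['- ' + l for l in a_lines[i1:i2]]
            out += ['+ ' + l for l in b_lines[j1:j2]]
    return out
```

Lines that differ only in their line_$NR prefix compare equal, which is exactly the "abstraction" the question asks for.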
I have a large .csv file (~26000 rows). I want to be able to read it into matlab. Another problem is that it contains a collection of strings delimited by commas in one of the fields.
I'm having trouble reading it. I tried things like tdfread, which won't work here. Any tricks with textscan I should be aware of?
Is there any other way?
I'm not sure what is generating your CSV file but that is your problem.
The point of a CSV file, is that the file itself designates separation of fields. If the text of the CSV contains commas, then nothing you can do will help you. How would ANY program know when the text in a single field contains commas, or when that comma is a field delimiter?
Proper CSV would have a text qualifier. Some generators/readers give you the option to use one. The standard text qualifier is a " (quote). It's changeable, though, because your text may contain those, too.
Again, it's all about generating proper CSV content.
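To illustrate what a text qualifier buys you, here is a tiny Python example (the sample data is made up): the quote character marks where a field really ends, so a quoted field can safely contain the delimiter.

```python
import csv
import io

# The "" qualifier tells the parser that the commas inside the second
# field are data, not field separators.
sample = 'id,tags,score\n1,"red,green,blue",7\n'
rows = list(csv.reader(io.StringIO(sample)))
```

Without the quotes, the same line would parse into five fields instead of three.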
There's a chance that xlsread won't give you the answer you expect -- do the strings always appear in the same columns, for example? I think (as everyone else seems to :-) that it would be more robust to just use
fid = fopen('yourfile.csv');
and then either textscan
t = textscan(fid, '%s', 'delimiter', sprintf('\n'));
t = t{1};
or just fgetl (the example in the help is perfect).
After that you can do some line-by-line processing -- using textscan again on the text content of each line, for example, is a nice, quick way to get a cell-array that will allow fast analysis of each line.
You have a problem because you're reading it in as a .csv and you have commas within your data. You can open it in Excel and manipulate the data, possibly removing the unwanted commas with Excel formulas. I work with .csv files for DB imports quite a bit. I imagine Matlab has similar rules, which is: no commas in your data.
Can you tell us more about your data? Are there commas throughout, or just one column? Maybe you can read it in as tab delimited?
Are you using a Unix system? The reason I am asking is that you could use a command-line function such as sed and regular expressions to clean those data files before you pass them into Matlab. Here is a link that explains how to do exactly what you are looking for.
Since, as others have observed, your file is CSV with commas inside what you think of as a single field, it's going to be hard to persuade Matlab that that really is only one field. I think your best strategy is going to be to read one line at a time, into a string acting as a buffer, and to translate it, field-by-field, into the variables or other data structures that you want. Since Matlab has in-built regular expression capabilities this shouldn't be too hard.
And, as others have already suggested, posting a sample of your data would help us to help you.
One easy solution is:
path='C:\folder1\folder2\';
data = 'data.csv';
data = dataset('xlsfile', fullfile(path, data));
Of course you could also do the following:
[data,path] = uigetfile('C:\folder1\folder2\*.csv');
data = dataset('xlsfile', fullfile(path, data));
Now you will have loaded the data as a dataset. An easy way to get column 1, for example, is
double(data(1))