How do I copy text and paste it into another line in Notepad?

I need to copy the text between "0;" and ";" and then paste it at the desired location. How do I do that? I tried using regular expressions, but could not get it to work.
example:
filename new.txt
Start
1- 0; Text 1 ;
2- line 1
3- line 1
4- line 3
5-
6- 0; Text 2 ;
7- line 1
8- line 1
9- line 3
10-
11- 0; Text 3 ;
12- line 1
13- line 1
14- line 3
end
change to:
filename new.txt
Start
1- 0; Text 1 ;
2- line 1
3- line 1
4- line 3 Text 1
5-
6- 0; Text 2 ;
7- line 1
8- line 1
9- line 3Text 2
10-
11- 0; Text 3 ;
12- line 1
13- line 1
14- line 3 Text 3
end

Not a question for Stack Overflow.
Select the text by clicking and dragging, making the desired portion blue.
Right-click the selection.
Click "Copy."
Move the cursor to the desired position, and right-click again.
Click "Paste."
Alternatively, you can press Ctrl+C to copy and Ctrl+V to paste.
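Plain Notepad cannot do the regex-based transformation the question asks for; that needs an editor with regex replace (e.g. Notepad++) or a small script. As a sketch only, here is one way to express the transformation from the example in Python — the helper name and the assumption that data lines look like "4- line 3" are mine, not from the question:

```python
import re

def append_block_titles(text):
    # Sketch: for each block headed by a "0; <title> ;" line, append that
    # title to the last numbered line of the block. Assumes numbered lines
    # look like "4- line 3", as in the question's example.
    lines = text.splitlines()
    out = list(lines)
    current_title = None
    last_line_idx = None

    def flush():
        # Append the pending title to the last data line of the block.
        if current_title is not None and last_line_idx is not None:
            out[last_line_idx] += ' ' + current_title

    for i, line in enumerate(lines):
        m = re.search(r'0;\s*(.*?)\s*;', line)
        if m:
            flush()                      # finish the previous block
            current_title = m.group(1)   # text between "0;" and ";"
            last_line_idx = None
        elif re.match(r'^\d+-\s*\S', line):
            last_line_idx = i            # remember the latest data line
    flush()                              # finish the final block
    return '\n'.join(out)
```

Running it over the "Start ... end" example above produces the "change to" output shown in the question.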

Related

Daikon failing to run: "Error at line 1 in file example.dtrace: No declaration was provided for program point program.point:::POINT"

I am attempting to run Daikon on a .decls and a .dtrace file I generated from a CSV file using an open-source Perl script. The .decls and .dtrace files are provided below. The daikon.jar file is held within a directory, which has a sub-directory "scripts" where I keep the .dtrace and .decls files.
I am attempting to call daikon using the following command from within the directory containing the daikon.jar file:
java -cp daikon.jar daikon.Daikon scripts/example.dtrace scripts/example.decls
The program response is the following:
Daikon version 5.8.10, released November 1, 2021; http://plse.cs.washington.edu/daikon.
(read 1 decls file)
Processing trace data; reading 1 dtrace file:
Error at line 1 in file scripts/example.dtrace: No declaration was provided for program point program.point:::POINT
I am confused as to why it can't find the declarations file I provided which contains the declaration for the program.point function. Below I have provided the contents of both the example.dtrace and the example.decls files.
example.dtrace
program.point:::POINT
a
1
1
b
1
1
c
2
1
d
2
1
e
4
1
aprogram.point:::POINT
a
3
1
b
3
1
c
4
1
d
4
1
e
5
1
example.decls
DECLARE
aprogram.point:::POINT
a
double
double
1
b
double
double
1
c
double
double
1
d
double
double
1
e
double
double
1
Your example.decls file declares a program point named aprogram.point:::POINT, which starts with an a. Your example.dtrace file contains samples for a program point named program.point:::POINT, which does not start with an a.
So, the message is right: there is no declaration for a program point named program.point:::POINT, though there is a declaration for a program point named aprogram.point:::POINT.
Making the program point names consistent between the two files should resolve your problem. By adding the character a to the beginning of your example.dtrace file, I was able to get Daikon to produce output:
Daikon version 5.8.11, released November 2, 2021; http://plse.cs.washington.edu/daikon.
(read 1 decls file)
Processing trace data; reading 1 dtrace file:
[2021-11-17T10:13:50.284232]: Finished reading example.dtrace
===========================================================================
aprogram.point:::POINT
a == b
c == d
a one of { 1.0, 3.0 }
c one of { 2.0, 4.0 }
e one of { 4.0, 5.0 }
Exiting Daikon.
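The rename can also be scripted rather than done by hand. A minimal sketch, assuming the exact names from the question, that rewrites the mismatched program-point header lines in the .dtrace text:

```python
def fix_program_point(dtrace_text,
                      wrong='program.point:::POINT',
                      right='aprogram.point:::POINT'):
    # Sketch: replace only lines that are exactly the wrong program-point
    # name, leaving variable names and values untouched.
    fixed_lines = []
    for line in dtrace_text.splitlines():
        fixed_lines.append(right if line == wrong else line)
    return '\n'.join(fixed_lines)
```

Equivalently, you could rename the point in example.decls instead; the only requirement is that the names agree between the two files.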

Matlab - read unstructured file

I'm quite new to Matlab and I've been searching, unsuccessfully, for a solution to the following issue: I have an unstructured txt file with several rows I don't need, but a number of rows inside that file do have a structured format. I've been researching how to "load" the file so I can edit it, but cannot find anything.
In case I wasn't clear, let me show you the content of the file:
8782 PROJCS["UTM-39",GEOGC.......
1 676135.67755473056 2673731.9365976951 -15 0
2 663999.99999999302 2717629.9999999981 -14.00231124135486 3
3 709999.99999999162 2707679.2185399458 -10 2
4 679972.20003752434 2674637.5679516452 0.070000000000000007 1
5 676124.87132483651 2674327.3183533219 -18.94794942571912 0
6 682614.20527054626 2671000.0000000549 -1.6383425512446661 0
...........
8780 682247.4593014461 2676571.1515358146 0.1541080392180566 0
8781 695426.98657108378 2698111.6168302582 -8.5039945992245904 0
8782 674723.80100125563 2675133.5486935056 -19.920312922947179 0
16997 3 21
1 2147 658 590
2 1855 2529 5623
.........
I'd appreciate it if someone could tell me whether it is possible to open the file and then load only the rows starting with 1 through the one starting with 8782. The first row and all the others are not important.
I know that manually copying and pasting to a new file would be a solution, but I'd like to know about the possibility of reading the file and editing it, for other ideas I have.
Thanks!
% Read the whole file and split into lines; lines{i} is the string of the i'th line.
lines = strsplit(fileread('filename'), '\n');
% Split each line on spaces; elements{i}{j} is the j'th field of the i'th line.
elements = arrayfun(@(x){strsplit(x{1}, ' ')}, lines);
% Remove the first row:
elements(1) = [];
% Take the first several rows:
n_rows = 8782;
elements = elements(1:n_rows);
Or, if the number of rows you need to take is not fixed, you can replace the last two statements above with:
% The structured block ends where the first-column numbering stops increasing by 1.
firsts = arrayfun(@(x)str2num(x{1}{1}), elements);
n_rows = find((firsts(2:end) - firsts(1:end-1)) ~= 1, 1, 'first');
elements = elements(1:n_rows);

Merge columns of data from multiple text files by row from each seperate file using Powershell

I have output from a numerical modelling code. I needed to extract a specific value from a series of files. I used the following code to get it (I derived this from an example that would extract IP addresses from logfiles):
$input_path = 'C:\_TEST\Input_PC\out5.txt'
$output_file = 'C:\_TEST\Output_PC_All\out5.txt'
$regex = '\bHEAD(.+)\s+[\-]*\d{1,3}\.\d{6,6}\s?\b'
select-string -Path $input_path -Pattern $regex -AllMatches | % { $_.Matches } | % { $_.Value } > $output_file
So I now have a number of text files containing measurements (the number of files may vary; currently there are 50). Each file has one column of numeric data, which may be positive or negative, with a number of rows that currently equals 7302 but may vary depending on the length of the time series modelled, as per the example data below.
Note: a semicolon preceding text indicates that what follows is a comment I am using to explain the order of the dataset; it does not appear in the data to be processed...
out1.txt
-1.000000 ; 1st line of out1.txt
2.000000 ; 2nd line of out1.txt
-3.000000 ; 3rd line of out1.txt
...
5.000000 ; nth line of out1.txt
out2.txt
-1.200000 ; 1st line of out2.txt
-2.200000 ; 2nd line of out2.txt
3.200000 ; 3rd line of out2.txt
...
-5.20000 ; nth line of out2.txt
outn.txt
1.300000 ; 1st line of outn.txt
-2.300000 ; 2nd line of outn.txt
-3.300000 ; 3rd line of outn.txt
...
10.300000 ; nth line of outn.txt
I need to merge them into a single text file (for this example let's call it "Combined_Output.txt") using Powershell, with the data ordered so that the first row of values from each output file appears first, then repeating for row 2 and so on, as below:
Combined_Output.txt
-1.000000 ; 1st line of out1.txt
-1.200000 ; 1st line of out2.txt
1.300000 ; 1st line of outn.txt
2.000000 ; 2nd line of out1.txt
-2.200000 ; 2nd line of out2.txt
-2.300000 ; 2nd line of outn.txt
-3.000000 ; 3rd line of out1.txt
3.200000 ; 3rd line of out2.txt
-3.300000 ; 3rd line of outn.txt
...
5.000000 ; nth line of out1.txt
-5.200000 ; nth line of out2.txt
10.300000 ; nth line of outN.txt
Just to say that I'm very new to this sort of thing, so I hope the explanation above makes sense; any help you can provide would be much appreciated.
EDIT
Having now run the models and used this code on the large data files created, there seems to be an issue with the ordering of the imported data. It occurs primarily when there are repeated values: for example, the second row of data from each out file has been combined in the following order by the script. It looks like there is some sorting based on the value of the data and not just on the out-file name:
Value ; out file text number
-1.215809 ; 1
-0.480543 ; 18
-0.480541 ; 19
-0.48054 ; 2
-0.480539 ; 20
-0.480538 ; 21
-0.480537 ; 22
-0.480536 ; 23
-0.480535 ; 24
-0.480534 ; 25
-0.480534 ; 26
-0.480688 ; 10
-0.480533 ; 27
-0.480532 ; 3
-0.480776 ; 4
-0.48051 ; 5
-0.48051 ; 6
-0.48051 ; 7
-0.48051 ; 8
-0.48051 ; 9
-0.48051 ; 11
-0.48051 ; 12
-0.48051 ; 13
I feel like I might have overcomplicated this answer, but let's see how we do. Consider the following dummy data, similar to your samples:
Out1.txt Out2.txt Out3.txt
-0.40000 0.800000 4.100000
3.500000 0.300000 -0.90000
-2.60000 0.800000 2.200000
0.500000 1.800000 -1.40000
3.600000 1.800000 1.400000
40000000 -0.70000 1.500000
The file contents are arranged side by side for answer brevity and to help understand the output. The code is as follows:
$allTheFiles = @()
Get-ChildItem c:\temp\out*.txt | ForEach-Object{
    $allTheFiles += ,(Get-Content $_.FullName)
}
# A for statement cannot start a pipeline directly, so wrap it in $(...)
$(
For ($lineIndex=0; $lineIndex -lt $allTheFiles[0].Count; $lineIndex++){
    For($fileIndex=0; $fileIndex -lt $allTheFiles.Count; $fileIndex++){
        $allTheFiles[$fileIndex][$lineIndex]
    }
}
) | Out-File -FilePath c:\temp\file.txt -Encoding ascii
After gathering all the out*.txt files, the code builds an array of arrays holding the file contents themselves. The nested For loops then cycle through the files, outputting one line from each file at a time. If you compare the sample data to the output, you should see that the first line of every file is output together, followed by the next line, and so on.
This code will produce the following output
-0.40000
0.800000
4.100000
3.500000
0.300000
-0.90000
-2.60000
0.800000
2.200000
0.500000
1.800000
-1.40000
3.600000
1.800000
1.400000
40000000
-0.70000
1.500000
Caveats
The code assumes that all files are of the same size. The number of lines is determined by the first file. If other files contain more data it would be lost in this model.

Gnuplot reading not locale encoding file

I want to plot data of an ISO_8859_1 encoded file (two columns of numbers). Those are the first 10 data points of the file:
#Pe2
1 0.8000
2 0.8000
3 0.8000
4 0.8000
5 0.8000
6 0.8000
7 0.8000
8 0.8000
9 0.8000
10 0.8000
The original file has 15000 data points. I create this data with MATLAB, specifically setting ISO-8859-1 encoding, so I am sure that that's the encoding. This is a snippet of the MATLAB code:
slCharacterEncoding('ISO-8859-1'); %Instruction before writing anything to the file.
fprintf(fileID,' %7d %7.4f',Tempo(i),y(i)); %For loop in this instruction
fprintf(fileID,'\r'); %Line terminator written after each data point
fclose(fileID); %Closing the file
This is the script that I run. This file is encoded with the default Windows txt files encoding:
set encoding iso_8859_1
set terminal wxt size 1000,551
# Line width of the axes
set border linewidth 1.5
# Line styles
set style line 1 lc rgb '#dd181f' lt 1 lw 1 pt 0 # red
# Axes label
set xlabel 'tiempo'
set ylabel 'valor'
plot 'Pe2.txt' with lines ls 1
This is the output of the gnuplot console when I run the script. After that I input "show encoding":
G N U P L O T
Version 4.6 patchlevel 5 last modified February 2014
Build System: MS-Windows 32 bit
Copyright (C) 1986-1993, 1998, 2004, 2007-2014
Thomas Williams, Colin Kelley and many others
gnuplot home: http://www.gnuplot.info
faq, bugs, etc: type "help FAQ"
immediate help: type "help" (plot window: hit 'h')
Terminal type set to 'wxt'
gnuplot> cd 'C:\Example'
gnuplot> load 'script.txt'
"script.txt", line 10: warning: Skipping data file with no valid points
gnuplot> plot 'Pe2.txt' with lines ls 1
^
"script.txt", line 10: x range is invalid
gnuplot> show encoding
nominal character encoding is iso_8859_1
however LC_CTYPE in current locale is Spanish_Spain.1252
gnuplot>
If I open the file, make some change, undo the change and save the file, gnuplot plots it. I guess that's because the editor saves it with the local encoding, which is the one gnuplot uses to read files.
How do I plot files with gnuplot that are not in the local encoding?
I also have what seems to be a similar problem when I output a file with VS2010 C#. If I don't specifically set the culture with:
Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo("en-US");
Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("en-US");
I am not able to save a file which gnuplot is able to plot. I believe this last problem is because of the "," and the "." used as decimal separators in different cultures.
In C# I save the files with this:
StreamWriter Writer = new StreamWriter(dir + @"\" + (k+1) + "_" + nombre + extension);
Writer.WriteLine("#" + (k+1) + "_" + nombre);
Writer.WriteLine();
Writer.WriteLine("{0,32} {1,32}", "#tiempo", "#valor");
for (int i = 0; i < tiempo.GetLength(0); i++)
{
    Writer.WriteLine("{0,32} {1,32}", tiempo[i].ToString(), valor[i, k]);
}
Thank you.
Your file has only carriage returns (\r, 0x0d) as line breaks, which doesn't work with gnuplot. You must use line feeds (\n, 0x0a); \r\n also works.

sed remove line containing a string and nothing but; automation using for loop

Q1: How do I make sed match the whole line, and delete the line only if it consists of nothing but the string?
I have a file that contains several of the following numbers:
1 1
3 1
12 1
1 12
25 24
23 24
I want to delete the lines where the two numbers are the same. For that I have been using either:
sed '/1 1/d' < old.file > new.file
OR
sed -n '/1 1/!p' < old.file > new.file
Here is the main problem: if I search for the pattern '1 1', I get rid of '1 12' as well. So I want the pattern to match the whole line, and only then delete it.
Q2: Automation of question 1
I am also trying to automate this problem. The range of numbers in the first column and the second column could be from 1 to 25.
So far this is what I got:
for ((i=1;i<26;i++)); do
sed "/'$i' '$i'/d" < oldfile > newfile; mv newfile oldfile;
done
This does nothing to the oldfile in the end. :(
This would be more readable with awk:
awk '$1 == $2 {next} {print}' oldfile > newfile
Update based on comment:
If the requirement is to remove lines where the two values are within 1 of each other:
awk '{d = $1-$2; if (-1 <= d && d <= 1) next; else print}' oldfile
Unfortunately, awk does not have abs() (at least nawk and gawk don't)
Just put the first number in a group (\([0-9]*\)) and then look for it with a backreference (\1). Since the line to delete should contain only the group, repeated, use the ^ to mark the beginning of line and the $ to mark the end of line. For example, for the following file:
$ cat input
1 1
3 1
12 1
1 12
12 12
12 13
13 13
25 24
23 24
...the result is:
$ sed '/^\([0-9]*\) \1$/d' input
3 1
12 1
1 12
12 13
25 24
23 24
You can also do it with grep:
grep -E -v "^([0-9]+)\s\1$" testfile
Put the digits at the start of the line in a group and remember them; a line is removed only when the same remembered digits, and nothing else, follow after a single whitespace.
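The same whole-line filter, as a short Python sketch for comparison (the function name is mine; it compares the two fields directly instead of using a backreference):

```python
def drop_equal_pairs(lines):
    # Keep only lines whose two whitespace-separated fields differ,
    # so '1 1' is dropped but '1 12' survives.
    kept = []
    for line in lines:
        fields = line.split()
        if len(fields) == 2 and fields[0] == fields[1]:
            continue  # both numbers identical: drop the line
        kept.append(line)
    return kept
```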