Check if file exists on a Cisco switch - operating-system

I am trying to check if a file exists on the internal flash:/ disk of a Cisco switch.
switch-2950#dir flash:/
Directory of flash:/
2 -rwx 3721946 Jul 24 2009 16:17:10 +00:00 c2950-i6k2l2q4-mz.121-22.EA13.bin
3 -rwx 2035 Mar 01 1993 00:25:01 +00:00 config.text
5 drwx 4416 Jul 24 2009 16:19:50 +00:00 html
6 -rwx 556 Mar 01 1993 00:49:35 +00:00 vlan.dat
335 -rwx 315 Jul 24 2009 17:43:37 +00:00 env_vars
21 -rwx 112 Jul 24 2009 16:10:20 +00:00 info
22 -rwx 112 Jul 24 2009 16:20:56 +00:00 info.ver
23 drwx 64 Mar 01 1993 00:00:11 +00:00 crashinfo
25 -rwx 13495 May 18 2011 19:57:30 +00:00 config.old
336 -rwx 3832 Mar 01 1993 00:25:01 +00:00 private-config.text
7741440 bytes total (2124800 bytes free)
vlan.dat clearly exists. I can perform operations against it (such as copy).
However, I want to test if that particular file exists before performing
operations against it.
I am trying:
if os.path.isfile("flash:/vlan.dat"):
But it always returns False and the commands inside the 'if' statement are skipped over.
I have looked over numerous posts but they all cover Linux or Windows. I can't find anything regarding a Cisco file system.

os.path.isfile() checks the local filesystem of the machine running the script, not the switch's flash, so it always returns False here. I solved it another way: I send a 'dir flash:/' over the SSH channel, then check the output for the
existence of the 'vlan.dat' file.
today = time.strftime("%x")
timenow = time.strftime("%X")
filename = '%s-%s_%s' % (hostname, today, timenow)
filename = filename.replace("/", "-").replace(":", "-")
ssh_channel.send("dir flash:/\n")
time.sleep(0.3)
outp = ssh_channel.recv(2000)
output = outp.decode("utf-8")
if 'vlan.dat' in output:
    ssh_channel.send("copy flash:/vlan.dat tftp://192.168.1.106/" + filename + ".dat\n")
    time.sleep(0.3)
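If a plain substring test feels too loose (it would also match a partial filename), the check can be factored into a small helper that compares whole filenames from the dir listing. This is just a sketch; dir_output stands in for whatever recv() returned:

```python
def file_exists(dir_output, name):
    """Return True if `name` appears as a filename in a Cisco `dir` listing.

    Each file line in the listing ends with the filename, so compare the
    last whitespace-separated token instead of doing a raw substring
    search (which would wrongly match, e.g., 'config.text' when looking
    for 'config.tex').
    """
    for line in dir_output.splitlines():
        parts = line.split()
        if parts and parts[-1] == name:
            return True
    return False
```

Usage is then `if file_exists(output, 'vlan.dat'): ...` in place of the `in` test.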


Problems with empty cell in a large matrix: MATLAB

I have a large matrix of 102730 rows in the form of a text file (a sample text file is attached) with some header lines in it. The first column shows the year, the next the month, followed by the day, then value1, value2 and value3. Some of the cells are missing/empty. I want to fill these empty cells with NaN, so that they don't interfere with the next value.
This is the input matrix:
1970 01 13 21.0 6.1 06 000.0
1970 01 14 22.4 8.1 03 000.0
1970 01 15 21.2 8.1 04 000.0
1970 01 16 22.6 9.1 04 000.0
1970 01 17 22.8 9.1 02 000.0
1970 01 18 22.9 8.9 07 000.0
1970 01 19 23.8 10.8 04 000.0
1970 01 20 21.8 12.1 10 010.5
1970 01 21 19.8 06 012.9
1970 01 22 15.3 8.5 07 000.0
1974 06 28 39.2 25.6 03 000.0
1974 06 29 41.2 30.5 05 000.0
1974 06 30 40.3 31.2 07 000.0
1974 07 01 41.3 31.5 12 000.0
1974 07 02 43.3 31.3 20 000.0
1974 07 03 41.2 16 041.6
1974 07 04 34.3 21.4 14 054.5
1974 07 05 33.1 23.8 05 000.0
1974 07 06 36.2 28.9 06 000.0
1975 04 18 36.6 20.8 12 000.0
1975 04 19 37.4 21.1 05 000.0
1975 04 20 39.9 27.0 07 000.0
1975 04 21 39.5 27.3 09 000.0
1975 04 22
1975 04 23 39.5 27.1 08 000.0
1975 04 24 37.7 26.0 10 000.0
1975 04 25 38.7 27.2 15 000.0
The desired output matrix:
1970 01 13 21.0 6.1 06 000.0
1970 01 14 22.4 8.1 03 000.0
1970 01 15 21.2 8.1 04 000.0
1970 01 16 22.6 9.1 04 000.0
1970 01 17 22.8 9.1 02 000.0
1970 01 18 22.9 8.9 07 000.0
1970 01 19 23.8 10.8 04 000.0
1970 01 20 21.8 12.1 10 010.5
1970 01 21 19.8 NaN 06 012.9
1970 01 22 15.3 8.5 07 000.0
1974 06 28 39.2 25.6 03 000.0
1974 06 29 41.2 30.5 05 000.0
1974 06 30 40.3 31.2 07 000.0
1974 07 01 41.3 31.5 12 000.0
1974 07 02 43.3 31.3 20 000.0
1974 07 03 41.2 NaN 16 041.6
1974 07 04 34.3 21.4 14 054.5
1974 07 05 33.1 23.8 05 000.0
1974 07 06 36.2 28.9 06 000.0
1975 04 18 36.6 20.8 12 000.0
1975 04 19 37.4 21.1 05 000.0
1975 04 20 39.9 27.0 07 000.0
1975 04 21 39.5 27.3 09 000.0
1975 04 22 NaN NaN NaN NaN
1975 04 23 39.5 27.1 08 000.0
1975 04 24 37.7 26.0 10 000.0
1975 04 25 38.7 27.2 15 000.0
As a first attempt, I tried this:
T = readtable('sample.txt') ;
The above didn't work, since it messed up and gave the wrong number of columns when there are 2 digits before the decimal point. Secondly, I found this link: Creating new matrix from cell with some empty cells disregarding empty cells
The following code snippet from that link may be useful, but I don't know how to read the data directly from the text file in order to apply this code and the subsequent retrieval process:
inds = ~cellfun('isempty', elem); %elem to be replaced as sample
I also found a method to detect empty cells here: How do I detect empty cells in a cell array?
but I couldn't figure out how to read the data from a text file considering these empty cells.
Could anyone please help?
Since R2019a, you can simply use readmatrix:
>> myMat = readmatrix('sample.txt')
From the docs:
For delimited text files, the importing function converts empty fields in the file to either NaN (for a numeric variable) or an empty character vector (for a text variable). All lines in the text file must have the same number of delimiters. The importing function ignores insignificant white space in the file.
For previous releases, you can use a detectImportOptions object when calling readtable:
% Detect options.
>> opts = detectImportOptions('sample.txt');
% Read table.
>> myTable = readtable('sample.txt',opts);
% Visualise last rows of table.
>> tail(myTable)
ans =
8×7 table
Var1 Var2 Var3 Var4 Var5 Var6 Var7
____ ____ ____ ____ ____ ____ ____
1975 4 18 36.6 20.8 12 0
1975 4 19 37.4 21.1 5 0
1975 4 20 39.9 27 7 0
1975 4 21 39.5 27.3 9 0
1975 4 22 NaN NaN NaN NaN
1975 4 23 39.5 27.1 8 0
1975 4 24 37.7 26 10 0
1975 4 25 38.7 27.2 15 0
For your text file, detectImportOptions fills missing values with NaN, as you can verify with:
>> opts.VariableOptions
If the desired output is a matrix, you can then use table2array:
>> myMat = table2array(myTable)
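The rule readmatrix applies for delimited files (an empty field between two consecutive delimiters becomes NaN) can be sketched in a few lines of Python for intuition. The doubled space in the sample row is an assumption about how the raw file encodes the missing value:

```python
import math

def parse_row(line, sep=" "):
    """Split a delimited row WITHOUT collapsing runs of the delimiter,
    so an empty field between two separators becomes NaN."""
    return [float(tok) if tok.strip() else math.nan
            for tok in line.rstrip("\n").split(sep)]

# A row whose 5th field is empty (two consecutive separators):
row = parse_row("1970 01 21 19.8  06 012.9")
```

Note that this only works when the file preserves the delimiter for the empty field; if runs of whitespace are collapsed, the column information is lost, which is exactly why readtable alone struggled.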

How to find scala java.util.Date difference in minutes?

scala> val dates = filtering1.map(x => (format.parse(x._1),format.parse(x._2)))
dates: org.apache.spark.rdd.RDD[(java.util.Date, java.util.Date)] = MapPartitionsRDD[7] at map at <console>:34
stores below values
scala> dates.collect
res0: Array[(java.util.Date, java.util.Date)] = Array((Sat Jun 30 23:42:00 IST 2018,Thu Jul 04 15:10:00 IST 2019), (Sat Jun 30 23:37:00 IST 2018,Sun Jul 01 14:44:00 IST 2018), (Sat Jun 30 23:13:00 IST 2018,Sun Feb 28 23:34:00 IST 219), (Sat Jun 30 22:58:00 IST 2018,Mon Jul 01 18:22:00 IST 2019), (Sat Jun 30 22:36:00 IST 2018,Mon Jul 01 16:01:00 IST 2019), (Sat Jun 30 21:53:00 IST 2018,Tue Jul 02 10:36:00 IST 2019), (Sat Jun 30 21:42:00 IST 2018,Sun Jun 30 23:25:00 IST 2019), (Sat Jun 30 21:36:00 IST 2018,Mon Jul 01 16:47:00 IST 2019), (Sat Jun 30 21:16:00 IST 2018,Mon Jul 01 18:18:00 IST 2019), (Sat Jun 30 21:10:00 IST 2018,Thu Jul 04 12:25:00 IST 2019), (Sat Jun 30 21:02:00 IST 2018,Sat Dec 01 17:29:00 IST 2018), (Sat Jun 30 20:54:00 IST 2018,Mon Jul 01 15:51:00 IST 2019), (Sat Jun 30 ...
But how do I compute the difference between the two dates so that the result is in minutes?
I have this command, but it does not give me the desired output. What changes should be made?
val time_diff = dates.map(x => (x._2.getTime()-x._1.getTime())/(60*1000)%60)
And what do the (60*1000) and %60 parts represent?
getTime gives milliseconds, so dividing by 1000.0 gives seconds and dividing by 1000.0*60 gives minutes. Be aware that dividing a Long by an Int gives you another Long, so you are truncating the resulting minutes down to the nearest integer. Adding the modulus 60, % 60, simply wraps the minutes to 0-59, so if you had a 90 minute difference, that would be 1 hour 30 minutes, and the result of your calculation would just be 30.
val t = System.currentTimeMillis
val x = new java.util.Date(t)
val y = new java.util.Date(t + 10000) // ten seconds later
(y.getTime - x.getTime) / (1000.0 * 60) // 0.167
(y.getTime - x.getTime) / (1000 * 60) // 0 !
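The truncation and wrap-around described above is plain integer arithmetic; here is a quick sketch in Python with a hypothetical 90 minute gap:

```python
diff_ms = 90 * 60 * 1000          # two timestamps 90 minutes apart, in ms

minutes = diff_ms // (60 * 1000)  # integer division truncates the fraction
wrapped = minutes % 60            # modulus wraps into 0-59: 90 min -> 30

seconds_exact = diff_ms / 1000.0  # float division keeps the fraction
```

So dropping the `% 60` (and dividing by a float if you want fractional minutes) gives the true difference.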
First, when you subtract two times, the result might be negative.
Second, getTime returns the value in milliseconds.
1000 ms = 1 second
So first divide by 1000 to get the time in seconds; to get minutes, divide again by 60.
Since you require the result in minutes:
val time_diff = dates.map(x => (x._2.getTime()-x._1.getTime())/(60*1000))

Reading timeseries from multiple csv files in a folder: MATLAB

I have a bunch of csv files in a folder in the following format. I want to extract the complete time series from each (the numeric part from line #17), identify duplicate records, and merge them in ascending order according to year and date.
Specific csv file is accessible via google drive link below
wnsnum 1
paroms Waterhoogte
loccod HOEKVHLD
locoms Hoek van Holland
rks_begdat 1993 07 09
rks_begtyd 00:00
rks_enddat 2014 31 12
rks_endtyd 23:50
begdat begtyd enddat endtyd rkssta
1993 07 09 00:00 2007 31 12 23:50 D
2008 01 01 00:00 2009 30 12 23:50 G
2009 31 12 00:00 2009 31 12 23:50 O
2010 01 01 00:00 2011 17 06 18:40 G
2011 17 06 18:50 2011 18 06 18:50 O
2011 18 06 19:00 2014 31 12 23:50 G
datum tijd bpgcod waarde kwlcod
1993 07 09 00:00 -70 0
1993 07 09 00:10 -69 0
1993 07 09 00:20 -68 0
1993 07 09 00:30 -67 0
1993 07 09 00:40 -68 0
1993 07 09 00:50 -70 0
1993 07 09 01:00 -69 0
1993 07 09 01:10 -69 0
1993 07 09 01:20 -68 0
1993 07 09 01:30 -67 0
1993 07 09 01:40 -65 0
1993 07 09 01:50 -64 0
1993 07 09 02:00 -62 0
1993 07 09 02:10 -61 0
1993 07 09 02:20 -61 0
1993 07 09 02:30 -59 0
1993 07 09 02:40 -58 0
1993 07 09 02:50 -55 0
My code currently works in the following way:
SL_files = dir(sprintf('%s%s%s',fullfile(dirName),'\','*.csv'));
for idx = 1:size(SL_files,1)
disp(SL_files(idx,1).name)
fid = fopen(sprintf('%s%s%s',fullfile(dirName),'\',SL_files(idx,1).name));
data = textscan(fid, '%s %f %f %f %f %f %f', ...
'Delimiter',',', 'MultipleDelimsAsOne',1,'headerlines',16);
fclose(fid);
end
Now I can read the file. My remaining problem is how to combine the data from multiple files into one matrix and arrange it in ascending order according to the year and day values. Thanks!
I finally solved my problem. Here is the code:
numMat_All = [];
for idx = 1:size(SL_files,1)
disp(SL_files(idx,1).name)
fid = fopen(sprintf('%s%s%s',fullfile(dirName),'\',SL_files(idx,1).name));
data = textscan(fid, '%s %f %f %f %f %f %f', ...
'Delimiter',',', 'MultipleDelimsAsOne',1,'headerlines',16);
fclose(fid);
CharCell = data{1,1};
result = regexprep(CharCell,'[\s;:]+',' ');
numMat = cell2mat(cellfun(@str2num, result(:,1:end), 'UniformOutput', false));
numMat_All = [numMat_All;numMat];
data = []; CharCell = []; result = []; numMat = [];
end
dt = datetime([numMat_All(:,1:5), repmat(0,length(numMat_All),1)]);
T = table(dt,numMat_All(:,[6:7]));
T1 = sortrows(T,'dt');
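The combine-and-sort step itself is language-neutral; here is a minimal sketch of the same idea in Python (stdlib only). The simple (year, month, day, value) row layout is a simplifying assumption, not the real 17-header-line format:

```python
def merge_series(files_rows):
    """files_rows: one list of rows per file; each row is a tuple
    (year, month, day, value). Merge all files, drop exact duplicate
    records, and sort ascending by date."""
    seen = set()
    merged = []
    for rows in files_rows:
        for row in rows:
            if row not in seen:      # skip duplicate records across files
                seen.add(row)
                merged.append(row)
    merged.sort(key=lambda r: (r[0], r[1], r[2]))  # year, month, day
    return merged
```

This mirrors what the MATLAB code does with `[numMat_All;numMat]` followed by `sortrows`, plus explicit de-duplication.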

sed search and replace specific column only

I'm taking weather data from wunderground.com and then trimming it down for use with gnuplot. I'm having trouble replacing only the second column's data, from month numbers to month abbreviations. I am only interested in the second column.
I want to go from this:
>2013 08 02 23 37 00 73.3
>2013 08 02 23 42 00 73.4
>2013 08 02 23 45 00 73.3
>2013 08 02 23 47 00 73.1
>2013 08 02 23 52 00 73.1
>2013 08 02 23 57 00 73.1
To this:
>2013 AUG 02 23 37 00 73.3
>2013 AUG 02 23 42 00 73.4
>2013 AUG 02 23 45 00 73.3
>2013 AUG 02 23 47 00 73.1
>2013 AUG 02 23 52 00 73.1
>2013 AUG 02 23 57 00 73.1
I am trying to use sed to change the numbers into the correct month, but other columns keep getting mangled too. I only want the matching sed expression to apply, not all of them. This is the command I am trying to use:
sed -e 's/01/JAN/' -e 's/02/FEB/' -e 's/03/MAR/' -e 's/04/APR/' -e 's/05/MAY/' -e 's/06/JUN/' -e 's/07/JUL/' -e 's/08/AUG/' -e 's/09/SEP/' -e 's/10/OCT/' -e 's/11/NOV/' -e 's/12/DEC/'
How would I go about this?
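To see why the unanchored chain corrupts other fields, run it on a single sample line: the very first substitution already matches the "01" inside "2013" (a reproduction sketch; the leading ">" from the quoted data is dropped):

```shell
printf '2013 08 02 23 37 00 73.3\n' |
  sed -e 's/01/JAN/' -e 's/02/FEB/' -e 's/03/MAR/' -e 's/04/APR/' \
      -e 's/05/MAY/' -e 's/06/JUN/' -e 's/07/JUL/' -e 's/08/AUG/' \
      -e 's/09/SEP/' -e 's/10/OCT/' -e 's/11/NOV/' -e 's/12/DEC/'
# prints: 2JAN3 AUG FEB 23 37 00 73.3
```

Each `s///` replaces the first occurrence anywhere on the line, so the year and day fields are fair game; the answers below fix this by anchoring on the field position instead.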
This might work for you (GNU sed):
sed -nri 'G;s/$/01JAN02FEB03MAR04APR05MAY06JUN07JUL08AUG09SEP10OCT11NOV12DEC/;s/ (..)(.*)\1(...)/ \3\2/;P' file
This adds a lookup table to the end of each line and substitutes the key for the value.
I would use awk for this:
$ awk 'BEGIN{split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec",a)} {$2=a[$2+0]}1' a
>2013 Aug 02 23 37 00 73.3
>2013 Aug 02 23 42 00 73.4
>2013 Aug 02 23 45 00 73.3
>2013 Aug 02 23 47 00 73.1
>2013 Aug 02 23 52 00 73.1
>2013 Aug 02 23 57 00 73.1
To update the file with the new content, just redirect and then move:
awk .... file > temp_file && mv temp_file file
Explanation
What we do is to give awk a list of strings with the months names. Once we convert it into an array, a[1] will be Jan, a[2] Feb and so on. So then it is just a matter of replacing the 2nd field with a[2nd field].
BEGIN{split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec",a)} splits the string of month names into the a[] array.
{$2=a[$2+0]} sets the 2nd field as a[2nd field]. The $2+0 is done to convert 08 to 8.
Finally 1 evaluates as true and makes awk perform its default action: {print $0}.
Workaround that works for this problem (since your first column is very predictable) but not the general question:
sed -E -e 's/^([0-9]{4}) 01/\1 JAN/' -e 's/^([0-9]{4}) 02/\1 FEB/' etc.
awk has a sub function, but it could get unwieldy with as many options as you have here.
A Perl script might be the best way to go.
$ awk '{$2=substr("JanFebMarAprMayJunJulAugSepOctNovDec",(3*$2)-2,3)}1' file
>2013 Aug 02 23 37 00 73.3
>2013 Aug 02 23 42 00 73.4
>2013 Aug 02 23 45 00 73.3
>2013 Aug 02 23 47 00 73.1
>2013 Aug 02 23 52 00 73.1
>2013 Aug 02 23 57 00 73.1
Since it came up in a comment:
The idiomatic awk way to map from a month number to a name is:
number = (match("JanFebMarAprMayJunJulAugSepOctNovDec",<name>)+2)/3
and the above is just the natural inverse of that:
name = substr("JanFebMarAprMayJunJulAugSepOctNovDec",(3*<number>)-2,3)
Like with anything in awk there's various ways to get the output you want but IMHO the symmetry here makes it an attractive solution:
awk 'BEGIN{
months = "JanFebMarAprMayJunJulAugSepOctNovDec"
name = "Jul"
number = (match(months,name)+2)/3
print name " -> " number
name = substr(months,(3*number)-2,3)
print number " -> " name
}'
Jul -> 7
7 -> Jul
Notice that the script uses the same definition for months no matter which direction the conversion is being done and it's a similar math calculation in both directions.
Nothing wrong with doing it this way too of course:
awk 'BEGIN{
split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec",num2name)
for (number in num2name) {
name2num[num2name[number]] = number
}
name = "Jul"
number = name2num[name]
print name " -> " number
name = num2name[number]
print number " -> " name
}'
Jul -> 7
7 -> Jul
Just a few more lines of code, nbd.
Using GNU awk's strftime() and mktime() functions:
awk '{$2=strftime("%b",mktime("2014 " $2 " 1 1 0 0"))}1' file
>2013 Aug 02 23 37 00 73.3
>2013 Aug 02 23 42 00 73.4
>2013 Aug 02 23 45 00 73.3
>2013 Aug 02 23 47 00 73.1
>2013 Aug 02 23 52 00 73.1
>2013 Aug 02 23 57 00 73.1
Explanation
mktime("2014 " $2 " 1 1 0 0") fakes an epoch time, using column 2 as the month
strftime("%b", mktime("2014 " $2 " 1 1 0 0")) converts that epoch back to a date; the %b format prints the abbreviated month name (Jan, Feb, etc.)
The benefit of this awk:
It is shorter, of course. Second, you can control/adjust the format in strftime() to get any date format you like.
For example, to switch to the full month name, just change %b to %B; you needn't rewrite the code.
awk '{$2=strftime("%B",mktime("2014 " $2 " 1 1 0 0"))}1' file

Print ping response time while higher than a fixed value, including date

I'd like to have a running ping that only prints output when the time passes a threshold value.
And when it passes this value, it should also add the date to the output.
Here is what I have tried:
ping 8.8.8.8 | awk '{split($7,a,"[=.]");if (a[2]>58) print a[2],d}' d="$(date)"
ping 8.8.8.8 | awk '{"date"| getline date;split($7,a,"[=.]");if (a[2]>58) print a[2],date}'
The problem with both of these is that the date is not updated. Everything is printed with the same date.
59 Fri Nov 15 08:55:04 CET 2013
59 Fri Nov 15 08:55:04 CET 2013
59 Fri Nov 15 08:55:04 CET 2013
60 Fri Nov 15 08:55:04 CET 2013
59 Fri Nov 15 08:55:04 CET 2013
I know this could be solved using a bash script, but I'd just like a simple command line for testing line ping times.
The following works for me. I'm using OSX Mavericks:
ping 8.8.8.8 | awk -F"[= ]" '{if($10>50) {cmd="date"; cmd | getline dt; close(cmd) ; print $10, dt}}'
This will output the ping line for times > 50 ms.
I get this sample output:
51.352 Fri Nov 15 00:33:40 PST 2013
50.519 Fri Nov 15 00:33:42 PST 2013
52.407 Fri Nov 15 00:33:44 PST 2013
50.904 Fri Nov 15 00:33:50 PST 2013
52.864 Fri Nov 15 00:33:54 PST 2013
When you say:
ping 8.8.8.8 | awk '{split($7,a,"[=.]");if (a[2]>58) print a[2],d}' d="$(date)"
the variable d is evaluated only once (the shell expands $(date) before awk even starts), which is why the same timestamp is appended to all the lines in the output. You could instead use strftime (a GNU awk function) as an argument to print:
ping 8.8.8.8 | awk '{split($7,a,"[=.]");if (a[2]>58) print a[2], strftime()}'
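For comparison, the same filter can be sketched in Python, which sidesteps the pitfall entirely because the timestamp is computed per matching line. The time= field format is assumed from standard ping output, and pingwatch.py is a hypothetical script name:

```python
import re
import time

THRESHOLD_MS = 58.0
PATTERN = re.compile(r"time=([\d.]+)")   # matches e.g. "... time=59.3 ms"

def over_threshold(line, threshold=THRESHOLD_MS):
    """Return the ping time in ms if it exceeds the threshold, else None."""
    m = PATTERN.search(line)
    if m:
        ms = float(m.group(1))
        if ms > threshold:
            return ms
    return None

def report(stream, threshold=THRESHOLD_MS):
    """Print the time plus a fresh timestamp for each slow line."""
    for line in stream:
        ms = over_threshold(line, threshold)
        if ms is not None:
            print(ms, time.strftime("%c"))   # date evaluated per line
```

Run it as `ping 8.8.8.8 | python3 pingwatch.py` with `report(sys.stdin)` as the entry point.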