I am fetching an online video with wget on Linux. May I ask what the fields in its download progress log below mean? What do the numbers at the end of each line indicate?
Thank you!
0K .......... .......... .......... .......... .......... 2% 79.6K 26s
50K .......... .......... .......... .......... .......... 4% 317K 16s
100K .......... .......... .......... .......... .......... 7% 10.9M 10s
150K .......... .......... .......... .......... .......... 9% 322K 9s
200K .......... .......... .......... .......... .......... 11% 11.5M 7s
250K .......... .......... .......... .......... .......... 14% 327K 7s
300K .......... .......... .......... .......... .......... 16% 10.8M 6s
350K .......... .......... .......... .......... .......... 19% 11.6M 5s
400K .......... .......... .......... .......... .......... 21% 338K 5s
450K .......... .......... .......... .......... .......... 23% 10.8M 4s
500K .......... .......... .......... .......... .......... 26% 11.4M 4s
550K .......... .......... .......... .......... .......... 28% 11.0M 3s
600K .......... .......... .......... .......... .......... 31% 347K 3s
650K .......... .......... .......... .......... .......... 33% 10.8M 3s
700K .......... .......... .......... .......... .......... 35% 11.6M 3s
750K .......... .......... .......... .......... .......... 38% 10.9M 2s
800K .......... .......... .......... .......... .......... 40% 11.5M 2s
850K .......... .......... .......... .......... .......... 43% 10.9M 2s
900K .......... .......... .......... .......... .......... 45% 373K 2s
950K .......... .......... .......... .......... .......... 47% 11.4M 2s
1000K .......... .......... .......... .......... .......... 50% 10.9M 2s
1050K .......... .......... .......... .......... .......... 52% 11.5M 1s
1100K .......... .......... .......... .......... .......... 55% 11.0M 1s
1150K .......... .......... .......... .......... .......... 57% 11.3M 1s
1200K .......... .......... .......... .......... .......... 59% 11.7M 1s
1250K .......... .......... .......... .......... .......... 62% 10.8M 1s
1300K .......... .......... .......... .......... .......... 64% 11.6M 1s
1350K .......... .......... .......... .......... .......... 67% 412K 1s
1400K .......... .......... .......... .......... .......... 69% 10.7M 1s
1450K .......... .......... .......... .......... .......... 71% 40.1K 1s
1500K .......... .......... .......... .......... .......... 74% 35.8K 2s
1550K .......... .......... .......... .......... .......... 76% 35.8K 2s
1600K .......... .......... .......... .......... .......... 79% 11.0M 2s
1650K .......... .......... .......... .......... .......... 81% 35.8K 2s
1700K .......... .......... .......... .......... .......... 83% 35.8K 2s
1750K .......... .......... .......... .......... .......... 86% 35.8K 2s
1800K .......... .......... .......... .......... .......... 88% 11.0M 1s
1850K .......... .......... .......... .......... .......... 90% 35.8K 1s
1900K .......... .......... .......... .......... .......... 93% 35.8K 1s
1950K .......... .......... .......... .......... .......... 95% 35.8K 1s
2000K .......... .......... .......... .......... .......... 98% 54.5K 0s
2050K .......... .......... .......... ........ 100% 11.5M=15s
They are, in order, the percentage downloaded so far, the current download speed, and the estimated time remaining before the download finishes. Each dot represents 1 KB received (50 KB per line), and the number at the start of each line is the offset, in KB, at which that line begins; the "=15s" on the final line is the total time the download took.
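As a quick check of that reading, the first progress line can roughly be reproduced from those rules. This is a sketch of my own (not wget's actual code, which may smooth the speed over a longer window); it assumes the total size is about 2088 KB, i.e. the 2050 KB offset of the last line plus its 38 dots:

total_kb = 2050 + 38      # approximate total size, inferred from the last line
done_kb = 0 + 50          # "0K" offset plus 50 dots of 1 KB each
speed_kbps = 79.6         # speed reported on that line, in KB/s

percent = 100 * done_kb / total_kb           # ~2.4%, which wget shows as "2%"
eta_s = (total_kb - done_kb) / speed_kbps    # ~25.6 s, shown as "26s"
print(f"{percent:.0f}% {eta_s:.0f}s")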
When I get the results of Get-PhysicalDisk, it appears that the results are sorted as text and not as numbers:
Get-PhysicalDisk | Where-Object { $_.CanPool -eq $True } | Sort-Object -Property DeviceID
I get
Number MediaType CanPool OperationalStatus HealthStatus Usage Size
10 SSD TRUE OK Healthy Auto-Select 6.82 TB
11 SSD TRUE OK Healthy Auto-Select 6.82 TB
12 SSD TRUE OK Healthy Auto-Select 6.82 TB
13 SSD TRUE OK Healthy Auto-Select 6.82 TB
14 SSD TRUE OK Healthy Auto-Select 6.82 TB
15 SSD TRUE OK Healthy Auto-Select 6.82 TB
16 SSD TRUE OK Healthy Auto-Select 6.82 TB
17 SSD TRUE OK Healthy Auto-Select 6.82 TB
18 SSD TRUE OK Healthy Auto-Select 6.82 TB
19 SSD TRUE OK Healthy Auto-Select 6.82 TB
20 SSD TRUE OK Healthy Auto-Select 6.82 TB
5 SSD TRUE OK Healthy Auto-Select 6.82 TB
6 SSD TRUE OK Healthy Auto-Select 6.82 TB
7 SSD TRUE OK Healthy Auto-Select 6.82 TB
8 SSD TRUE OK Healthy Auto-Select 6.82 TB
9 SSD TRUE OK Healthy Auto-Select 6.82 TB
So it appears that the sort is in text order and not numeric order. How do I sort it as a number?
.DeviceId is a string property, so you would need to cast it to [int] using a calculated expression to sort them properly:
Get-PhysicalDisk -CanPool $true | Sort-Object { [int] $_.DeviceId }
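The underlying issue is plain lexicographic ordering of digit strings, where "10" sorts before "5". A quick illustration of the same effect in Python (my own example with made-up IDs, not Get-PhysicalDisk output):

ids = ["10", "11", "20", "5", "9"]
print(sorted(ids))           # ['10', '11', '20', '5', '9']  -- text order
print(sorted(ids, key=int))  # ['5', '9', '10', '11', '20']  -- numeric, like the [int] cast above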
For example, let's say that I have an invoice for 1000 CUA (Currency A) and the exchange rate is 1 CUA = 20.20 CUB (Currency B). So I make 10 payments of 2019.90 CUB each:
 #    Payment (CUB)    Payment (CUA)    Balance (CUA)
 0                                        1000.00
 1       2019.90          100.00           900.00
 2       2019.90          100.00           800.00
 3       2019.90          100.00           700.00
 4       2019.90          100.00           600.00
 5       2019.90          100.00           500.00
 6       2019.90          100.00           400.00
 7       2019.90          100.00           300.00
 8       2019.90          100.00           200.00
 9       2019.90          100.00           100.00
10       2019.90          100.00             0.00
 Σ      20199.00         1000.00
1000.00 CUA is 20200.00 CUB but total payments were only 20199.00 CUB
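The missing 1.00 CUB is accumulated rounding: each 2019.90 CUB payment is worth 99.995 CUA, which is rounded up to 100.00 CUA when it is credited against the invoice, so the balance reaches zero even though slightly less money was received. A small sketch of that arithmetic (my own, using Python's decimal module; the half-up rounding of the credit is an assumption):

from decimal import Decimal, ROUND_HALF_UP

rate = Decimal("20.20")             # 1 CUA = 20.20 CUB
payment_cub = Decimal("2019.90")

exact_cua = payment_cub / rate                                      # 99.9950495...
credited_cua = exact_cua.quantize(Decimal("0.01"), ROUND_HALF_UP)   # 100.00 CUA

shortfall_cub = ((credited_cua - exact_cua) * rate).quantize(Decimal("0.01"))
print(shortfall_cub)         # 0.10 CUB short per payment
print(shortfall_cub * 10)    # 1.00 CUB short over the 10 payments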
I'm logging energy usage data as a counter, which I would like to display as cumulative graphs that reset daily, as similarly asked here.
I can generate the cumulative value as follows:
SELECT mean("value") \
FROM "energy" \
WHERE $timeFilter \
GROUP BY time($__interval)
and the daily value as well:
SELECT max("value") \
FROM "energy" \
WHERE $timeFilter \
GROUP BY time(1d)
but I cannot subtract these or combine them in one query, because the GROUP BY intervals are different.
(How) is this possible in InfluxDB? I've looked at INTEGRATE() but haven't found a way to make this work.
The data looks like this (example limited to 1 day):
time value
---- ----
2018-12-10T17:00:00Z 7
2018-12-10T18:00:00Z 9
2018-12-10T19:00:00Z 10
2018-12-10T20:00:00Z 11
2018-12-10T21:00:00Z 13
2018-12-10T22:00:00Z 14
2018-12-10T23:00:00Z 15
2018-12-11T00:00:00Z 16
2018-12-11T01:00:00Z 17
2018-12-11T02:00:00Z 20
2018-12-11T03:00:00Z 24
2018-12-11T04:00:00Z 25
2018-12-11T05:00:00Z 26
2018-12-11T06:00:00Z 27
2018-12-11T07:00:00Z 28
2018-12-11T08:00:00Z 29
2018-12-11T09:00:00Z 31
2018-12-11T10:00:00Z 32
2018-12-11T11:00:00Z 33
2018-12-11T12:00:00Z 34
2018-12-11T13:00:00Z 35
2018-12-11T14:00:00Z 36
2018-12-11T15:00:00Z 37
2018-12-11T16:00:00Z 38
2018-12-11T17:00:00Z 39
I can plot the following:
But I want something like:
I found a solution, it's quite simple in the end:
SELECT kaifa-kaifa_fill as Energy FROM
(SELECT first(kaifa) as kaifa_fill from energyv2 WHERE $timeFilter group by time(1d) TZ('Europe/Amsterdam')),
(SELECT first(kaifa) as kaifa from energyv2 WHERE $timeFilter GROUP BY time($__interval))
fill(previous)
Note that fill(previous) is required to ensure that kaifa_fill and kaifa overlap.
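In effect, the query subtracts from every sample the first counter reading of the calendar day that sample falls in, which is what makes the graph reset at the start of each day. A rough equivalent of that logic in Python (my own sketch, not part of the InfluxDB solution), using a few readings like those from the question:

# For each reading, subtract the first reading of its calendar day,
# giving a cumulative value that resets to 0 at the start of each day.
readings = [
    ("2018-12-10T22:00:00Z", 14),
    ("2018-12-10T23:00:00Z", 15),
    ("2018-12-11T00:00:00Z", 16),
    ("2018-12-11T01:00:00Z", 17),
]

first_of_day = {}
for ts, value in readings:
    day = ts[:10]                          # e.g. "2018-12-10"
    first_of_day.setdefault(day, value)    # remember the day's first reading
    print(ts, value - first_of_day[day])   # prints 0, 1, 0, 1 for this sample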
Example data:
time kaifa kaifa_fill kaifa_kaifa_fill
---- ----- ---------- ----------------
2019-08-03T00:00:00Z 179688195 179688195 0
2019-08-03T01:00:00Z 179746833 179688195 58638
2019-08-03T02:00:00Z 179803148 179688195 114953
2019-08-03T03:00:00Z 179859464 179688195 171269
2019-08-03T04:00:00Z 179914038 179688195 225843
2019-08-03T05:00:00Z 179967450 179688195 279255
2019-08-03T06:00:00Z 179905910 179688195 217715
2019-08-03T07:00:00Z 179847272 179688195 159077
2019-08-03T08:00:00Z 179698065 179688195 9870
2019-08-03T09:00:00Z 179378170 179688195 -310025
2019-08-03T10:00:00Z 179341013 179688195 -347182
2019-08-03T11:00:00Z 179126201 179688195 -561994
2019-08-03T12:00:00Z 179039116 179688195 -649079
2019-08-03T13:00:00Z 178935193 179688195 -753002
2019-08-03T14:00:00Z 178687870 179688195 -1000326
2019-08-03T15:00:00Z 178517762 179688195 -1170433
2019-08-03T16:00:00Z 178409776 179688195 -1278420
2019-08-03T17:00:00Z 178376102 179688195 -1312093
2019-08-03T18:00:00Z 178388875 179688195 -1299320
2019-08-03T19:00:00Z 178780181 179688195 -908015
2019-08-03T20:00:00Z 178928226 179688195 -759969
2019-08-03T21:00:00Z 179065241 179688195 -622954
2019-08-03T22:00:00Z 179183098 179688195 -505098
2019-08-03T23:00:00Z 179306179 179688195 -382016
2019-08-04T00:00:00Z 179306179 179370042 -63863
2019-08-04T00:00:00Z 179370042 179370042 0
2019-08-04T01:00:00Z 179417649 179370042 47607
2019-08-04T02:00:00Z 179464094 179370042 94053
2019-08-04T03:00:00Z 179509960 179370042 139918
2019-08-04T04:00:00Z 179591820 179370042 221779
2019-08-04T05:00:00Z 179872817 179370042 502775
2019-08-04T06:00:00Z 180056278 179370042 686236
2019-08-04T07:00:00Z 179929713 179370042 559671
2019-08-04T08:00:00Z 179514604 179370042 144562
2019-08-04T09:00:00Z 179053049 179370042 -316992
2019-08-04T10:00:00Z 178683225 179370042 -686817
2019-08-04T11:00:00Z 178078269 179370042 -1291773
2019-08-04T12:00:00Z 177650387 179370042 -1719654
2019-08-04T13:00:00Z 177281724 179370042 -2088317
2019-08-04T14:00:00Z 177041367 179370042 -2328674
2019-08-04T15:00:00Z 176807397 179370042 -2562645
2019-08-04T16:00:00Z 176737148 179370042 -2632894
2019-08-04T17:00:00Z 176677349 179370042 -2692693
2019-08-04T18:00:00Z 176690702 179370042 -2679340
2019-08-04T19:00:00Z 176734825 179370042 -2635216
2019-08-04T20:00:00Z 176810300 179370042 -2559742
2019-08-04T21:00:00Z 176866035 179370042 -2504007
2019-08-04T22:00:00Z 176914803 179370042 -2455239
2019-08-04T23:00:00Z 176965893 179370042 -2404149
2019-08-05T00:00:00Z 176965893 177016983 -51090
2019-08-05T00:00:00Z 177016983 177016983 0
Example graph:
My table is something like this
id ...... amount...........food
+++++++++++++++++++++++++++++++++++++
1 ........ 5 ............. banana
1 ........ 4 ............. strawberry
2 ........ 2 ............. banana
2 ........ 7 ............. orange
2 ........ 8 ............. strawberry
3 ........ 10 .............lime
3 ........ 12 .............banana
What I want is a table displaying each food, with its average amount across the IDs it appears in.
The table should look something like this I think:
food ........... avg............
++++++++++++++++++++++++++++++++
banana .......... 6.3 ............
strawberry ...... 6 ............
orange .......... 7 ............
lime ............ 10 ............
I'm not really sure how to do this. If I use just AVG(amount), it will just average over the whole amount column.
Did you try GROUP BY?
SELECT food, AVG(amount) "avg"
FROM table1
GROUP BY food
Here is SQLFiddle
Output:
| food | avg |
|------------|-------------------|
| lime | 10 |
| orange | 7 |
| strawberry | 6 |
| banana | 6.333333333333333 |
I need to find the max value of each column over every 1-minute window. Log lines arrive every 5 seconds, so each minute contains 12 values, and for every minute I need the max of each column.
02 11:23:18 03 004 009 009 001 002 002 001 001 001 001 004 000 000 258 258 000 00 4/05/2013
01 11:23:22 01 001 001 001 001 001 001 002 001 001 001 004 000 000 000 000 000 00 4/05/2013
02 11:23:23 01 002 006 012 001 002 002 002 002 002 001 004 000 000 241 241 000 00 4/05/2013
01 11:23:27 01 001 002 005 004 006 001 003 001 001 001 004 000 000 000 000 000 00 4/05/2013
02 11:23:28 01 003 001 002 001 002 001 002 001 001 001 004 000 000 256 257 000 00 4/05/2013
01 11:23:32 01 001 001 001 001 001 001 002 001 001 006 009 000 000 000 000 000 00 4/05/2013
02 11:23:33 02 003 003 015 002 005 002 002 001 001 001 004 000 000 204 205 000 00 4/05/2013
01 11:23:37 02 001 001 001 001 002 001 003 001 001 001 005 000 000 000 000 000 00 4/05/2013
02 11:23:38 01 002 001 009 001 004 009 003 001 001 001 004 000 000 266 267 000 00 4/05/2013
01 11:23:42 01 001 001 000 001 001 001 002 001 001 002 011 000 000 000 000 000 00 4/05/2013
02 11:23:43 01 002 002 009 001 002 001 004 000 002 001 004 000 000 195 195 000 00 4/05/2013
I need the max values for columns 3 to 14. I am new to Perl, so please excuse me.
This works for me:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.010;
my @maxima;
my $prevmin = "";

sub print_maxima
{
    print "@maxima\n" if (scalar(@maxima) > 0);
    @maxima = ();
}

while (<>)
{
    my(@row) = split;
    my($hhmm) = substr $row[1], 0, 5;   # hh:mm part of the time column
    if ($hhmm ne $prevmin)
    {
        print_maxima;                   # new minute: report maxima for the previous one
        $prevmin = $hhmm;
    }
    foreach my $col (0..(scalar(@row)-1))
    {
        $maxima[$col] //= $row[$col];   # Avoid undef values
        $maxima[$col] = $row[$col] if ($row[$col] gt $maxima[$col]);
    }
}
print_maxima;
Given an extended version of your sample data, carefully crafted so that the maximum values from the second minute are always strictly one less than the value from the first minute unless the values were all zeros:
02 11:23:18 03 004 009 009 001 002 002 001 001 001 001 004 000 000 258 258 000 00 4/05/2013
01 11:23:22 01 001 001 001 001 001 001 002 001 001 001 004 000 000 000 000 000 00 4/05/2013
02 11:23:23 01 002 006 012 001 002 002 002 002 002 001 004 000 000 241 241 000 00 4/05/2013
01 11:23:27 01 001 002 005 004 006 001 003 001 001 001 004 000 000 000 000 000 00 4/05/2013
02 11:23:28 01 003 001 002 001 002 001 002 001 001 001 004 000 000 256 257 000 00 4/05/2013
01 11:23:32 01 001 001 001 001 001 001 002 001 001 006 009 000 000 000 000 000 00 4/05/2013
02 11:23:33 02 003 003 015 002 005 002 002 001 001 001 004 000 000 204 205 000 00 4/05/2013
01 11:23:37 02 001 001 001 001 002 001 003 001 001 001 005 000 000 000 000 000 00 4/05/2013
02 11:23:38 01 002 001 009 001 004 009 003 001 001 001 004 000 000 266 267 000 00 4/05/2013
01 11:23:42 01 001 001 000 001 001 001 002 001 001 002 011 000 000 000 000 000 00 4/05/2013
02 11:23:43 01 002 002 009 001 002 001 004 000 002 001 004 000 000 195 195 000 00 4/05/2013
02 11:24:18 03 003 008 009 001 002 002 001 001 001 001 004 000 000 258 258 000 00 4/05/2013
01 11:24:22 01 001 001 001 001 001 001 002 001 001 001 004 000 000 000 000 000 00 4/05/2013
01 11:24:23 01 002 006 012 001 002 002 002 001 001 001 004 000 000 241 241 000 00 4/05/2013
01 11:24:27 01 001 002 005 003 005 001 003 001 001 001 004 000 000 000 000 000 00 4/05/2013
01 11:24:28 01 003 001 002 001 002 001 002 001 001 001 004 000 000 256 257 000 00 4/05/2013
01 11:24:32 01 001 001 001 001 001 001 002 001 001 005 009 000 000 000 000 000 00 4/05/2013
02 11:24:33 02 003 003 014 002 005 002 002 001 001 001 004 000 000 204 205 000 00 4/05/2013
01 11:24:37 02 001 001 001 001 002 001 003 001 001 001 005 000 000 000 000 000 00 4/05/2013
01 11:24:38 01 002 001 009 001 004 008 003 001 001 001 004 000 000 265 266 000 00 4/05/2013
01 11:24:41 01 001 001 000 001 001 001 002 001 001 002 010 000 000 000 000 000 00 4/05/2013
01 11:24:42 01 002 002 009 001 002 001 003 000 001 001 004 000 000 195 195 000 00 4/05/2013
the output of the script is:
02 11:23:43 03 004 009 015 004 006 009 004 002 002 006 011 000 000 266 267 000 00 4/05/2013
02 11:24:42 03 003 008 014 003 005 008 003 001 001 005 010 000 000 265 266 000 00 4/05/2013
The script is a simple control-break report, based on the hh:mm portion of the second column. The maxima comparison exploits the leading zeroes on the data, using string comparison (gt) rather than numeric comparison. It scans over all the columns, so it reports the largest time within the minute in column 2.
It would get confused with the following adjacent data lines:
01 11:24:41 01 001 001 000 001 001 001 002 001 001 002 010 000 000 000 000 000 00 4/05/2013
01 11:24:42 01 002 002 009 001 002 001 003 000 001 001 004 000 000 195 195 000 00 4/06/2013
Note that the date portion changed, so the rows belong to two different days, but they'd be aggregated into the same minute because the code does not look at the date column. It also is not clear whether your date format is mm/dd/yyyy or dd/mm/yyyy; either could be valid. It's better to use yyyy-mm-dd format; it is unambiguous and sorts into date order automatically.
It's Perl — TMTOWTDI (There's More Than One Way To Do It).
The body of the foreach my $col loop could be replaced with:
$maxima[$col] = $row[$col] if (!defined $maxima[$col] || $row[$col] gt $maxima[$col]);
This avoids the need for Perl 5.10 (the //= operator was added then). I doubt if you'd be able to measure the difference in performance. The foreach control could also be a simple for (my $col = 0; $col < scalar(@row); $col++); again, there's not much to choose between the two in this case, though if the number of columns were humongous (thousands of columns), the for would use less memory than the foreach.