I have the result of a query in this form:
EmpId  Profit  OrderID  CompanyName
-----  ------  -------  -------------
1      500 $   1        Acme Company
1      200 $   1        Evolve Corp.
2      400 $   1        Acme Company
2      100 $   1        Evolve Corp.
3      500 $   1        Acme Company
3      500 $   1        Evolve Corp.
The desired report format is:
EmpId  OrderId  Acme's Profit  Evolve's Profit
-----  -------  -------------  ---------------
1      1        700 $          700 $
Total           -----          ------
                700 $          700 $
2      1        500 $          500 $
Total           -----          ------
                500 $          500 $
3      3        1000 $         1000 $
Total           -----          ------
                1000 $         1000 $
-----  -------  -------------  ---------------
GrandTotal      2200 $         2200 $
I have tried hard with the crosstab but I'm unable to figure out how to group the records. I tried moving CompanyName into the crosstab columns and EmpId into the rows, and tried a crosstab group, but the results are not as expected.
My questions are:
1) Is this format achievable with a crosstab?
2) How do I group records by EmpId in my crosstab in such a way that the companies are laid out horizontally?
Edit:
I also need subtotal and grand total fields.
Put the company in the columns section and you're good to go.
I am trying to write a script that reports the state of the server hardware.
The following functions have been completed:
Get-SummaryStatus.ps1
Get-Memory.ps1
Get-PowerSupply.ps1
Get-Proecssor.ps1
Get-Fanstatus.ps1
Get-SummaryStatus.ps1 output is
Server ServerStatus
------ ------------
AAA {System.Collections.Hashtable}
$report.server output is
AAA
$report.serverstatus output is
Name Value
---- -----
MemoryStatus OK
PowerStatus OK
ProcessorStatus OK
Fanstatus OK
Get-Memory.ps1 output is
Server MemoryStatus
------ ------------
AAA {System.Collections.Hashtable, System.Collections.Hashtable}
$report.server output is
AAA
$report.MemoryStatus output is
Name Value
---- -----
MemoryID 1
MemoryStatus OK
MemoryID 2
MemoryStatus OK
Get-PowerSupply.ps1 output is
Server PowerStatus
------ -----------
AAA {System.Collections.Hashtable, System.Collections.Hashtable}
$report.server output is
AAA
$report.PowerStatus output is
Name Value
---- -----
PowerID 1
PowerStatus OK
PowerID 2
PowerStatus OK
Get-Proecssor.ps1 output is similar to the three above.
Get-Fanstatus.ps1 output is similar to the three above.
Because I have more than 1000 services that need to be monitored, I want to take the following steps to improve efficiency.
I created a new main function (ServerHealthCheck) that calls all the sub-functions (summary, memory, power, ...). (completed)
If the summary status is OK, then output the summary status. (completed)
If the summary memory status is failed, then call Get-Memory.ps1 to find which memory module is bad. (completed)
If the summary power status is failed, then call Get-PowerSupply.ps1 to find which power supply is bad. (completed)
If one of the hardware components has failed, I want to rearrange the result into a new object.
My question is: how do I add/replace entries in the object to get the expected output? Thanks in advance. (One possible approach is sketched after the expected output below.)
Expected output (if the summary memory status is failed):
Server ServerStatus
------ ------------
AAA {System.Collections.Hashtable}
$report.server output is
AAA
$report.serverstatus output is
Name Value
---- -----
MemoryID 1
MemoryStatus failed
PowerStatus OK
ProcessorStatus OK
Fanstatus OK
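For reference, here is a minimal sketch of one way to rearrange the object. It assumes the sub-scripts are wrapped as functions (called Get-SummaryStatus and Get-Memory here, after the script names) that return objects shaped like the outputs above; those names and the exact splicing are assumptions, not your actual code.
# Sketch only: assumes Get-SummaryStatus/Get-Memory are available as functions
# returning objects shaped like the outputs shown above.
$summary = Get-SummaryStatus
$status  = $summary.ServerStatus          # hashtable: MemoryStatus, PowerStatus, ...

if ($status.MemoryStatus -ne 'OK') {
    $memory = Get-Memory                  # MemoryStatus here is an array of hashtables

    # Keep only the failed module(s) and splice their keys into the summary
    # hashtable in place of the plain MemoryStatus entry.
    $failed = $memory.MemoryStatus | Where-Object { $_.MemoryStatus -ne 'OK' }
    $status.Remove('MemoryStatus')
    $status['MemoryID']     = $failed.MemoryID
    $status['MemoryStatus'] = $failed.MemoryStatus
}

# Re-emit the report with the rearranged status hashtable.
[PSCustomObject]@{ Server = $summary.Server; ServerStatus = $status }
The same pattern would apply to the power, processor, and fan checks.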
I am trying to monitor the running processes on my machine.
To achieve this I use Telegraf, InfluxDB, and Grafana.
In Telegraf I used the procstat plugin and query the procstat_lookup measurement.
My Telegraf conf file:
[[inputs.procstat]]
pid_tag = true
exe = ""
systemd_unit = "sshd"
[[inputs.procstat]]
pid_tag = true
exe = ""
systemd_unit = "influxd"
When I run the query Select * from procstat_lookup where time >= now() - 120s on the Ubuntu machine I get this output:
time                 exe  pattern  pid_count  pid_finder  result   result_code  running  systemd_unit
----                 ---  -------  ---------  ----------  ------   -----------  -------  ------------
1569906900000000000                1          pgrep       success  0            1        apache2
1569906900000000000                1          pgrep       success  0            1        sshd
But when I run the same query on the CentOS machine I get this output:
time                 pid_count  systemd_unit
----                 ---------  ------------
1569909600000000000  1          apache2
1569909600000000000  1          sshd
I wonder why the output differs for the same configuration on the two different operating systems.
I'm running a sysbench OLTP benchmark with TiDB on a GKE local SSD disk, but I'm getting poor performance compared to a GKE persistent SSD disk. How can I get the expected IOPS performance on a GKE local SSD disk by default?
I've run the TiDB OLTP benchmark and an fio benchmark with the psync engine, and both results show that IOPS on the local SSD disk is lower than on the persistent SSD disk. I've also run a thorough blktrace analysis. The fio command I ran is:
fio -ioengine=psync -bs=32k -fdatasync=1 -thread -rw=write -size=10G -filename=test -name="max throughput" -iodepth=1 -runtime=60 -numjobs=4 -group_reporting
The fio benchmark results for the local SSD disk and the persistent SSD disk are:
| disk type | iops | bandwidth |
|---------------------+------+-----------|
| local SSD disk | 302 | 9912kB/s |
| persistent SSD disk | 1149 | 37.7MB/s |
And the blktrace btt result is:
==================== All Devices ====================
ALL MIN AVG MAX N
--------------- ------------- ------------- ------------- -----------
Q2Q 0.000000002 0.003716416 14.074086987 34636
Q2G 0.000000236 0.000005730 0.005347758 25224
G2I 0.000000727 0.000005446 0.002450425 20575
Q2M 0.000000175 0.000000716 0.000027069 9447
I2D 0.000000778 0.000003197 0.000111657 20538
M2D 0.000001941 0.000011350 0.000431655 9447
D2C 0.000065510 0.000182827 0.001366980 34634
Q2C 0.000072793 0.001181298 0.023394568 34634
==================== Device Overhead ====================
DEV | Q2G G2I Q2M I2D D2C
---------- | --------- --------- --------- --------- ---------
( 8, 48) | 0.3532% 0.2739% 0.0165% 0.1605% 15.4768%
---------- | --------- --------- --------- --------- ---------
Overall | 0.3532% 0.2739% 0.0165% 0.1605% 15.4768%
Following the optimization guide, I manually remounted the disk with the nobarrier option, and the blktrace btt result then looks normal:
==================== All Devices ====================
ALL MIN AVG MAX N
--------------- ------------- ------------- ------------- -----------
Q2Q 0.000000006 0.000785969 12.031454829 123537
Q2G 0.000003929 0.000006162 0.005294881 94553
G2I 0.000004677 0.000029263 0.004555917 94553
Q2M 0.000004069 0.000005337 0.000328930 29019
I2D 0.000005166 0.000020476 0.001078527 94516
M2D 0.000012816 0.000056839 0.001113739 29019
D2C 0.000081435 0.000358712 0.006724447 123535
Q2C 0.000113965 0.000415489 0.006763290 123535
==================== Device Overhead ====================
DEV | Q2G G2I Q2M I2D D2C
---------- | --------- --------- --------- --------- ---------
( 8, 48) | 1.1351% 5.3907% 0.3017% 3.7705% 86.3348%
---------- | --------- --------- --------- --------- ---------
Overall | 1.1351% 5.3907% 0.3017% 3.7705% 86.3348%
However, according to Red Hat's documentation, write barriers have only a small negative performance impact (about 3%), so disabling them with nobarrier is no longer recommended, and nobarrier should not be used on storage configured on virtual machines:
The use of nobarrier is no longer recommended in Red Hat Enterprise Linux 6 as the negative performance impact of write barriers is negligible (approximately 3%). The benefits of write barriers typically outweigh the performance benefits of disabling them. Additionally, the nobarrier option should never be used on storage configured on virtual machines.
In addition to the nobarrier option, the local SSD disk optimization guide also suggests installing the Linux Guest Environment, but states that it is already installed on newer VM images. However, I found that it is not installed on the GKE node.
So I manually installed the Linux Guest Environment and tested again; this time the btt result looks as expected:
==================== All Devices ====================
ALL MIN AVG MAX N
--------------- ------------- ------------- ------------- -----------
Q2Q 0.000000001 0.000472816 21.759721028 301371
Q2G 0.000000215 0.000000925 0.000110353 246390
G2I 0.000000279 0.000003579 0.003997348 246390
Q2M 0.000000175 0.000000571 0.000106259 54982
I2D 0.000000609 0.000002635 0.004064992 246390
M2D 0.000001400 0.000005728 0.000509868 54982
D2C 0.000051100 0.000451895 0.009107264 301372
Q2C 0.000054091 0.000458881 0.009111984 301372
==================== Device Overhead ====================
DEV | Q2G G2I Q2M I2D D2C
---------- | --------- --------- --------- --------- ---------
( 8, 80) | 0.1647% 0.6376% 0.0227% 0.4695% 98.4778%
---------- | --------- --------- --------- --------- ---------
Overall | 0.1647% 0.6376% 0.0227% 0.4695% 98.4778%
So how can I get the expected IOPS performance on GKE local SSD disk by default without extra tuning?
I use ipptool to get the status of the current print job.
C:\Users\Administrator>ipptool http://localhost/ipp/printers get-completed-jobs.test
job-id job-state job-name job-originating-user-name job-media-sheets-completed
------ --------- -------- ------------------------- --------------------------
14 canceled RedHat 1
13 completed RedHat 1
12 completed RedHat 1
11 completed RedHat 1
How do I get a specific job-id and job-state?
What method does PowerShell use to extract these strings?
Question 1:
Get the following string:
14 canceled
Question 2:
Get the following string:
13 completed
12 completed
Question 3:
How do I get the most recent job-id and job-state?
The ConvertFrom-SourceTable cmdlet, available from the PowerShell Gallery (GitHub: iRon7/ConvertFrom-SourceTable), is capable of reading this type of data table:
$Jobs = ConvertFrom-SourceTable '
job-id job-state job-name job-originating-user-name job-media-sheets-completed
------ --------- -------- ------------------------- --------------------------
14 canceled RedHat 1
13 completed RedHat 1
12 completed RedHat 1
11 completed RedHat 1
'
In your case, it is probably something like:
$Jobs = $(.\ipptool http://localhost/ipp/printers get-completed-jobs.test) | ConvertFrom-SourceTable
The rest of your questions are actually a matter of basic PowerShell commands.
As in this example, the $Jobs object will give you access to, for example, the status of job 14:
$Jobs | ?{$_."job-id" -eq 14} | Select -Expand "job-state"
canceled
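If Question 2 means all jobs whose job-state is completed, a similar filter should work (again assuming the same $Jobs object; this is a sketch, not tested against your data):
$Jobs | ?{$_."job-state" -eq "completed"} | Select "job-id", "job-state"
Add -First 2 to the Select if, as in your example, you only want the two most recent completed jobs.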
And "How do I get the most recent job-id and job-state?":
(presuming that the most recent job is always on top)
$Jobs | Select "job-id", "job-state" -First 1
job-id job-state
------ ---------
14 canceled
(For other ConvertFrom-SourceTable examples see: https://stackoverflow.com/search?q=ConvertFrom-SourceTable)
I'm having issues merging coverage data for Perl scripts and modules. Running Devel::Cover individually works just fine, but when I try to combine the data I lose statistics for the Perl script, though not the module.
Let me explain.
I have a directory tree that looks like so..
Code_Coverage_Test
|
|---->lib
|
|---->t
|
Inside the root Code_Coverage_Test directory I have the Build.pl file that builds the tests for the module, and a script that kicks off two other scripts that automate some commands for me.
./Build.pl
#!/usr/bin/perl -w
use strict;
use Module::Build;
my $buildTests = Module::Build->new(
    module_name    => 'testPMCoverage',
    license        => 'perl',
    dist_abstract  => 'Perl .pm Test Code Coverage',
    dist_author    => 'me#myEmail.com',
    build_requires => {
        'Test::More' => '0.10',
    },
);
$buildTests->create_build_script();
./startTests.sh
#!/bin/sh
cd t
./doPMtest.sh
./doPLtest.sh
cd ../
perl Build testcover
Inside the lib dir I have the files I'm trying to run code coverage on.
lib/testPLCoverage.pl
#!/usr/bin/perl -w
use strict;
print "Ok!";
lib/testPMCoverage.pm
use strict;
use warnings;
package testPMCoverage;
sub hello {
    return "Hello";
}
sub bye {
    return "Bye";
}
1;
In the t dir I have my .t test file for the module and two scripts that kick off the tests for me, both of which are called by startTests.sh in the root directory.
t/testPMCoverage.t
#!/usr/bin/perl -w
use strict;
use Test::More;
require_ok( 'testPMCoverage' );
my $test = testPMCoverage::hello();
is($test, "Hello", "hello() test");
done_testing();
t/doPLtest.sh
#!/bin/sh
#Test 1
cd ../
cd lib
perl -MDevel::Cover=-db,../cover_db testPLCoverage.pl
t/doPMtest.sh
#!/bin/bash
cd ../
perl Build.pl
perl Build test
The issue I'm running into is that when the doPLtest.sh script runs, I get coverage data, no problem:
---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           STMT   Bran   Cond    Sub    pod   Time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
testPLCoverage.pl             100.0    n/a    n/a  100.0    n/a  100.0  100.0
Total                         100.0    n/a    n/a  100.0    n/a  100.0  100.0
---------------------------- ------ ------ ------ ------ ------ ------ ------
However, when the doPMtest.sh script finishes and the startTests.sh script initiates the Build testcover command, I lose that data along the way and get these messages:
Reading database path/Code_Coverage_Tests/cover_db
Devel::Cover: Warning: can't open testPLCoverage.pl for MD5 digest: No such file or directory
Devel::Cover: Warning: can't locate structure for statement in testPLCoverage.pl
Devel::Cover: Warning: can't locate structure for subroutine in testPLCoverage.pl
Devel::Cover: Warning: can't locate structure for time in testPLCoverage.pl
... and somehow I lose the data:
---------------------------- ------ ------ ------ ------ ------ ------ ------
File                           STMT   Bran   Cond    Sub    pod   Time  total
---------------------------- ------ ------ ------ ------ ------ ------ ------
blib/lib/testPMCoverage.pm     87.5    n/a    n/a   75.0    0.0  100.0   71.4
testPLCoverage.pl               n/a    n/a    n/a    n/a    n/a    n/a    n/a
Total                          87.5    n/a    n/a   75.0    0.0  100.0   71.4
---------------------------- ------ ------ ------ ------ ------ ------ ------
How can I combine the Perl module and Perl script tests to get valid code coverage in ONE file?
Perl doesn't store the full path to the files it uses. If it finds the file via a relative path then only the relative path is stored. You can see this in the paths perl shows in the warning and error messages from those files.
When Devel::Cover deals with files it uses the path given by perl. You can see this in the reports from Devel::Cover where you have testPLCoverage.pl and blib/lib/testPMCoverage.pm.
What this means for you in practice is that whenever you put coverage into a coverage DB you should ensure that you are doing it from the same directory, so that Devel::Cover can match and locate the files in the coverage DB.
I think this is the problem you are hitting.
My suggestion is that in t/doPLtest.sh you don't cd into lib. You can run something like:
perl -Mblib -MDevel::Cover=-db,../cover_db lib/testPLCoverage.pl
(As an aside, why is that file in lib?)
I think that would mean that Devel::Cover would be running from the project root in each case and so should allow it to match and find the files.