The following command in a cmd window
sqlcmd -S. -Usa -Ppass -dmaster -Q "RESTORE DATABASE [MYDATABASE] FROM DISK = 'D:\SQL Server\MYDATABASE.BAK' WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10"
displays the following progress output:
10 percent processed.
20 percent processed.
30 percent processed.
40 percent processed.
50 percent processed.
60 percent processed.
70 percent processed.
80 percent processed.
90 percent processed.
100 percent processed.
Processed 32320 pages for database 'MYDATABASE', file 'MYDATABASE' on file 1.
Processed 7 pages for database 'MYDATABASE', file 'MYDATABASE_log' on file 1.
But it turns out that the progress is shown only after the entire restore completes, which makes the STATS output useless during the process.
Any advice?
Here is the version of the sqlcmd tool:
Microsoft (R) SQL Server Command Line Tool
Version 12.0.2000.8 NT
Copyright (c) 2014 Microsoft. All rights reserved.
Update Dec-2016:
Just including the comment from the Microsoft Connect link shared in the comments:
SQLCMD was rewritten in SQL 2012 to use ODBC. Here is a small
regression error that appears to have sneaked in.
It's the same effect reported when calling RAISERROR('Hello', 0, 1) WITH NOWAIT within a script.
I believe you can look in the SQL Server logs to see the progress as it happens.
You can query percent_complete in sys.dm_exec_requests.
Use start to open a separate window and issue a SELECT percent_complete FROM sys.dm_exec_requests WHERE percent_complete > 0.
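For example (a minimal sketch; it assumes the same instance and credentials as the RESTORE and that the login can view server state), the second window could run:
start cmd /k sqlcmd -S. -Usa -Ppass -Q "SELECT session_id, command, percent_complete FROM sys.dm_exec_requests WHERE percent_complete > 0"
This returns one row per long-running request, including the RESTORE, with its live completion percentage.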
I am trying to add a cron job on elearning.mysite.gr (Moodle). My host logs this message every hour:
Oct 2 1:10:01 linux CROND[123456]: (admin) CMD (touch /tmp/test.txt > /dev/null)
In Site administration -> Notifications I get this message:
The admin/cli/cron.php script has not been run for 3 days 2 hours and should run every 1 min.
The Moodle documentation mentions /path/to/moodle/admin/cli/cron.php. Can I use it? In which way?
I tried this:
/usr/bin/php /path/to/moodle/admin/cli/cron.php
but it reports that the process completed with an error after one minute. Only
touch /tmp/test.txt > /dev/null
completes with success.
Moodle Documentation:
The CLI (command line interpreter) script. This will be at the path /path/to/moodle/admin/cli/cron.php
If in doubt, this is the correct script to use. This needs to be run by a 'PHP CLI' program on your computer. So the final command may look something like /usr/bin/php /path/to/moodle/admin/cli/cron.php. You can (and should) try this on your command line to see if it works. WARNING: Check your command-line PHP version is compatible with your chosen version of Moodle. (How do I check that?)
The command-line PHP program is different to the one running your web site and is not always the same version.
I don't know what to do... I would appreciate any help!
I tried via my host panel interface:
Type of process (I should choose one of the three):
Command line
URL
PHP
Command*: a placeholder to add my command to be executed
Execute: a dropdown with
1. Cron style
2. Daily at 00:00
I use Command line with:
touch /tmp/test.txt
When I add cron style with 1 * * * * *, I get a syntax error message.
With daily at 00:00 I get a success message, but the notification on my site still says:
The admin/cli/cron.php script has not been run for 17 days 22 hours and should run every 1 min.
That's the icon of my Plesk login. To change /path/to/moodle/admin/cli/cron.php, could I check the file manager in order to find the cron.php file?
As you add the cron job from a web interface (maybe Plesk?), you only need this part:
/path/to/moodle/admin/cli/cron.php
Replace /path/to/moodle/ with the real path on your server, usually something like /var/www/moodle/ or /var/www/vhosts/domain.name/httpdocs/moodle.
To run every minute, the cron style is:
* * * * *
If you can choose the PHP version, choose the same version as the one you are using for Moodle.
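Putting it together, a full crontab-style entry (a sketch only; the Moodle directory and PHP binary path are assumptions you must adapt to your server) would look like:
* * * * * /usr/bin/php /var/www/vhosts/domain.name/httpdocs/moodle/admin/cli/cron.php > /dev/null
In the Plesk form you would paste only the command part after the five asterisks. You can check the command-line PHP version with /usr/bin/php -v and compare it against the requirements of your Moodle release.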
Finally, I set touch /tmp/test.txt as the command, with cron style, as you suggested, because the other commands cannot complete successfully.
But in my notifications (Site administration in Moodle) I still get the message that cron doesn't run.
I have the following problem. I am using this command:
EXPORT TO "D:\ExportFiles\ACTIVATE_DICT.csv" OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES "D:\ExportFiles\FMessage.txt" SELECT * FROM DB2INST4.ACTIVATE_DICT;
In the Command Editor of the Control Center, this command successfully exported data from the ACTIVATE_DICT table to the CSV file ACTIVATE_DICT.csv.
But for a number of reasons I need to execute this command in IBM Data Studio or DataGrip, and there it cannot be executed in this form.
Therefore, I read the documentation for running EXPORT through the ADMIN_CMD procedure and, based on it, wrote the following command:
CALL SYSPROC.ADMIN_CMD('EXPORT to /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES /lotus/ExportFiles/FMessage.txt SELECT * FROM DB2INST4.ACTIVATE_DICT');
Here is the message on the result of the command:
[2018-10-11 15:15:23] [ ][3107] There is at least one warning message in the message file. SQLCODE=3107, SQLSTATE= , DRIVER=4.23.42
[2018-10-11 15:15:23] 1 row retrieved starting from 1 in 75 ms (execution: 29 ms, fetching: 46 ms)
And in the /lotus/ExportFiles/ directory there is neither an ACTIVATE_DICT.csv file nor an FMessage.txt file.
Question: how do I execute this command correctly? Maybe I'm doing something wrong?
sqlcode 3107 is a warning message:
SQL3107W At least one warning message was encountered during LOAD processing.
Explanation
You can load data into a database from a file, tape, or named pipe using the LOAD command. You can specify that any warnings or errors from the LOAD processing be printed to a message file. If no message file is specified, the warnings or errors are printed to standard out (unless the database manager instance is configured as a partitioned-database environment.)
It tells you to read the message log in the message file you specified, in your case /lotus/ExportFiles/FMessage.txt.
Please look into the file to see what error is logged and, if you need help understanding what is logged, please post the content of the file.
This message is returned when at least one warning was received during processing. If a message file is being used, the warnings and errors will be printed there.
This warning does not affect processing.
User response
Review the message file warning.
See the documentation for the EXPORT command using the ADMIN_CMD procedure, in particular the use of the 'MESSAGES ON SERVER' clause and how to get these messages using the result set returned by this routine in this case.
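As a sketch (the paths are the ones from the question; the message operation ID below is a placeholder that the real result set supplies), the call would look something like:
CALL SYSPROC.ADMIN_CMD('EXPORT TO /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES ON SERVER SELECT * FROM DB2INST4.ACTIVATE_DICT');
The single row it returns includes a MSG_RETRIEVAL column containing a ready-made query, roughly of the form
SELECT SQLCODE, MSG FROM TABLE(SYSPROC.ADMIN_GET_MSGS('<operation-id>')) AS MSG;
which you can then run to see the warnings behind SQL3107W. Also note that the export file path must be writable by the DB2 instance owner on the server.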
I'm running this PerfView command:
PerfView.exe /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /DumpHeap /NoView /NoGui /MaxCollectSec:30 collect
but it seems that even though I set /MaxCollectSec:30, the actual data collection does not stop and keeps adding data to the PerfViewData.etl file.
This is the output from the console window that PerfView opens when running the command:
VERBOSE LOG IN: PerfViewData.log.txt
EXECUTING: PerfView /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /DumpHeap /NoView /NoGui /MaxCollectSec:30 collect
Pre V4.0 .NET Rundown disabled, Type 'E' to enable symbols for V3.5 processes.
Do NOT close this console window. It will leave collection on!
Type S to stop collection, 'A' will abort.
Kernel Log: C:\PerfView\PerfViewData.kernel.etl
User mode Log: C:\PerfView\PerfViewData.etl
Starting collection at 12/07/2017 14:26:32
Collecting 10 sec: Size= 10.5 MB.
Collecting 20 sec: Size= 16.4 MB.
Exceeded MaxCollectSec 30
So there it is: Exceeded MaxCollectSec 30, but it keeps writing to the .etl files.
I want to send the client a PerfView command that collects system-wide data, so they can send me back the zip file with all the ETL files. Currently the command does not stop. Does somebody know why? What should I add to or remove from the command so that it stops automatically after 30 seconds?
I know it's been a while, but it looks like the /DumpHeap switch is the problem here - if you remove it, the trace will finish on time. I checked the PerfView source code and when DumpHeap is selected there is some interaction with the GUI window:
if (parsedArgs.DumpHeap)
{
    // Take a heap snapshot.
    GuiHeapSnapshot(parsedArgs, true);

    // Ensure that we clean up the heap snapshot state.
    parsedArgs.DumpHeap = false;
}
You may want to create an issue in the PerfView repository describing your problem.
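In the meantime, the same collection without the heap dump (simply the original command with /DumpHeap removed) should stop on its own after the 30 seconds:
PerfView.exe /Merge:true /zip:true /NoNGenRundown /NoClrRundown /KeepAllEvents /ThreadTime /NoView /NoGui /MaxCollectSec:30 collect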
I am using PowerShell to run sqlplus, and I would like PowerShell to detect whether there was an error after the script ran and perform some action, instead of me looking at the result file.
& 'sqlplus' 'system/myOraclePassword' '@Test' | out-file 'result.txt';
Normally in DOS there is %errorlevel% when a command encounters an error, and I wonder if there is something similar in PowerShell.
Of course, I can read the log file myself, but sometimes things get too routine and I may forget.
My Test.sql:
select level from dual
connect by level<5;
select 10/0 from dual;
quit;
There is clearly a division-by-zero error. result.txt captures it, but I would like PowerShell to detect it as well:
SQL*Plus: Release 12.1.0.2.0 Production on Thu Apr 27 16:24:30 2017
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Thu Apr 27 2017 16:17:34 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
LEVEL
----------
1
2
3
4
select 10/0 from dual
*
ERROR at line 1:
ORA-01476: divisor is equal to zero
Does the PowerShell statement return an error level after it is executed, like DOS does?
I have tried:
& 'sqlplus' 'system/myOraclePassword' '@Test' | out-file 'result.txt';
if (errorlevel 1)
{ write-host error;
}
else
{ write-host ok;
}
But that caused a syntax error:
errorlevel : The term 'errorlevel' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.
What is the proper way to check for errors in PowerShell?
UPDATE
I used this:
if ($LASTEXITCODE -ne 0 )
{
write-host error;
}
else
{
write-host ok;
}
Since you are invoking an executable, you probably want to check for the $LASTEXITCODE variable or the return value of sqlplus. In PowerShell each variable has a $ prefix.
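A minimal sketch combining the two (it reuses the credentials and Test script from the question and only checks the process exit code):
& 'sqlplus' 'system/myOraclePassword' '@Test' | Out-File 'result.txt'
if ($LASTEXITCODE -ne 0) {
    Write-Host "sqlplus reported an error, exit code $LASTEXITCODE"
} else {
    Write-Host "ok"
}
Keep in mind that sqlplus usually exits with code 0 even when a statement fails, unless the script contains something like WHENEVER SQLERROR EXIT SQL.SQLCODE, so you may need to add that line to Test.sql for $LASTEXITCODE to reflect the ORA-01476 error.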
A certain problem is only reproducible on the customer's side.
We cannot reproduce it locally despite all our attempts.
But I know that Task Manager in Windows 2008 R2 can create a dump file for a process. So, my question: is it possible to create a dump on the customer's site for a certain process of our software and then investigate that dump file locally?
We already made a new build of our software (we saved a build sandbox and *.PDB files for all binaries). Then we installed that on site, and now we are waiting for the customer to report that the problem has happened again, so we can create a dump file for the hanging process and then try to investigate it.
My question has 2 parts:
Would such a method work at all?
If yes, how exactly do we do it?
At the moment I have some doubts that it would work, because I tried a proof test on my local Win 2008 R2 VM. I built everything with .PDB files, then ran our software in a mode where it makes a long pause in the middle (a simple call of Sleep(30000)) and clicked "Create Dump File" in Task Manager exactly during that pause. Then I loaded that dump file in WinDbg to check what I could find there. The first thing that makes me pessimistic about this approach is a wrong stack trace: I cannot see a full stack trace in WinDbg. It shows me only the stack trace for the wow64.dll and ntdll.dll modules; I cannot see the stack trace for our code.
In particular I see only this:
wow64cpu!TurboDispatchJumpAddressEnd+0x6c0
wow64cpu!TurboDispatchJumpAddressEnd+0x56b
wow64!Wow64SystemServiceEx+0x1ce
wow64!Wow64LdrpInitialize+0x42a
ntdll!RtlUniform+0x6e6
ntdll!RtlCreateTagHeap+0xa7
ntdll!LdrInitializeThunk+0xe
But when I attach to the process with the debugger, I see a full call stack, like this:
ntdll.dll!7754fd910
[Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]
ntdll.dll!7754fd910
KernelBase.dll!76ae3bd50
KernelBase.dll!76ae44a50
ScrVm.DLL!Profiler::DoSleep(int milliseconds=30000) Line 205
ScrVm.DLL!Script::VmToolKit::iMethod_Sleep(unsigned char & han
ScrVm.DLL!CComponent::Invoke(const _SU::basic_string<char,std
ScrVm.DLL!Script::VirtualMachine::do_Invoke(Script::VmCommand
ScrVm.DLL!Script::VirtualMachine::InnerLoop( Line 4471
ScrVm.DLL!Script::VirtualMachine::Execute(unsigned long hFunc=
ScrVm.DLL!ScriptProcessor::Run(const _SU::basic_string<char,st
ScrVm.DLL!ScriptProcessor::ProcessDocument() Line 285 + 0x40 by
ScrVm.DLL!DocumentProcessor::Process(BinaryDOM::Document * pDo
ScrVm.DLL!CFuncExecScript::Execute() Line 219
ScrVm.DLL!SrvManager::Execute() Line 586 + 0x1d bytes
ScrVm.DLL!SrvManager::Run(tag_TReqHdr * pRequestBuf=0x00187
ScrVm.DLL!SrvManager::HandleRequest(tag_TReqHdr * pRequest
ScrVm.DLL!SrvProcessRequest(tag_TReqHdr * pRequestBuf=0x0
ScrVm.DLL!ProcessRequest(char * pRequestBuf=0x001873b6, char *
ScrVm.DLL!ProcessRequest_DLL(char * achMsg=0x001873b6, char *a
siteExec212.exe!00409b2d0
siteExec212.exe!0040a4cf0
As you can see, WinDbg seems to show only the last 7 frames of the stack, which are useless for me. Question: is it possible to recover the full stack trace from a dump file created with Task Manager in Windows 7/2008? Or at least get more frames in the stack trace, to see from what place in our code this call was made.
Note: compiler MS VisualStudio 2008, WinDbg 6.12 x64.
Since your process is 32 bit you must use the 32 bit version of Task Manager to create the dump. Default installs have it in C:\Windows\SysWow64\taskmgr.exe
Also, make sure to use the 32 bit version of windbg.
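If the customer has already captured a 64-bit dump of the WOW64 process, you can also try switching WinDbg to the 32-bit context before dumping the stack (a sketch; exact behavior depends on the WinDbg version, and your *.PDB files must be on the symbol path):
.load wow64exts
!wow64exts.sw
.reload
kb
The !wow64exts.sw command toggles between the 64-bit and 32-bit views of the thread, which is what is missing when only wow64 and ntdll frames are shown.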