TAP results do not show up during execution if a TestClassSetup is present - matlab

I have a problem. We use the MATLAB testing framework to analyze our codebase, and to track the results in our CI system TeamCity we use the TAP format. Here we have the following problem:
If a test class includes a TestClassSetup section, the TAP results show up only at the end, not during execution. This causes a few issues for us:
Timestamps created by the CI system might not be correct.
If informative output is produced within a test case, it is not shown together with the assertion statement.
We use the following (simplified) snippet to identify our TestSuite and execute it:
testSuite = matlab.unittest.TestSuite.fromFolder('.');
runner = matlab.unittest.TestRunner.withNoPlugins();
runner.addPlugin(matlab.unittest.plugins.TAPPlugin.producingOriginalFormat());
results = runner.run(testSuite);
With the following two classes the issue is reproducible (the content is of course made up & meaningless...):
classdef SomeTest < matlab.unittest.TestCase
    properties (TestParameter)
        param = {1, 2};
        param2 = {1, 2};
    end
    methods (TestClassSetup)
        function someSetup(testCase)
            pause(0.1);
        end
    end
    methods (Test)
        function testMethod(self, param, param2)
            fprintf('I''m here, with the params: %f/%f\n', param, param2);
            pause(0.1);
            self.assertGreaterThan(param, param2);
        end
    end
end
classdef SomeOtherTest < matlab.unittest.TestCase
    properties (TestParameter)
        param = {1, 2};
        param2 = {1, 2};
    end
    methods (Test)
        function testMethod(self, param, param2)
            fprintf('I''m here, with the params: %f/%f\n', param, param2);
            pause(0.1);
            self.assertGreaterThan(param, param2);
        end
    end
end
If you copy all three files into one folder and execute the runner, you'll see the following output (the assertion diagnostics are simplified):
1..8
I'm here, with the params: 1.000000/1.000000
not ok 1 - SomeOtherTest/testMethod(param=1,param2=1)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=1,param2=1) and it did not run to completion.
# ================================================================================
#
I'm here, with the params: 1.000000/2.000000
not ok 2 - SomeOtherTest/testMethod(param=1,param2=2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=1,param2=2) and it did not run to completion.
# ================================================================================
#
I'm here, with the params: 2.000000/1.000000
ok 3 - SomeOtherTest/testMethod(param=2,param2=1)
I'm here, with the params: 2.000000/2.000000
not ok 4 - SomeOtherTest/testMethod(param=2,param2=2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=2,param2=2) and it did not run to completion.
# ================================================================================
#
I'm here, with the params: 1.000000/1.000000
I'm here, with the params: 1.000000/2.000000
I'm here, with the params: 2.000000/1.000000
I'm here, with the params: 2.000000/2.000000
not ok 5 - SomeTest/testMethod(param=1,param2=1)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=1,param2=1) and it did not run to completion.
# ================================================================================
#
not ok 6 - SomeTest/testMethod(param=1,param2=2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=1,param2=2) and it did not run to completion.
# ================================================================================
#
ok 7 - SomeTest/testMethod(param=2,param2=1)
not ok 8 - SomeTest/testMethod(param=2,param2=2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=2,param2=2) and it did not run to completion.
# ================================================================================
What I would expect is that, in the second case as well, the assertion statements (and the ok / not ok TAP lines) are interleaved with the fprintf statements, as they are in the first case.
Does anyone have an idea?

The reason the presence of TestClassSetup "defers" the printing of the TAP output is that TAP is a streaming format, and if there is any TestClassSetup code, the framework does not yet know whether the tests will pass or not. For example, if you have a failure in TestClassTeardown (or through an addTeardown function call made in TestClassSetup), the end result is that all the tests that shared the TestClassSetup code will fail.
However, since it is a streaming format, the TAPPlugin wants to print each result as soon as it knows it. There is actually a TestRunnerPlugin method designed specifically for this case, the reportFinalizedResult method.
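For illustration, a minimal custom plugin built on that method might look like the following (a hypothetical sketch, not the TAPPlugin's actual implementation):
classdef StreamingResultPlugin < matlab.unittest.plugins.TestRunnerPlugin
    % Hypothetical plugin: prints each result as soon as it is finalized,
    % which for tests without class-level setup/teardown is right after the test.
    methods (Access = protected)
        function reportFinalizedResult(plugin, pluginData)
            result = pluginData.TestResult;
            fprintf('finalized %d: %s (passed=%d)\n', ...
                pluginData.Index, result.Name, result.Passed);
            % preserve the default behavior
            reportFinalizedResult@matlab.unittest.plugins.TestRunnerPlugin(plugin, pluginData);
        end
    end
end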
The more fundamental issue here, though, is the printing itself: I would recommend you avoid writing to the log using disp or fprintf. That approach is less than ideal because the plugins don't have any insight into information printed that way, and you can't redirect it anywhere other than the MATLAB command line.
However, if you instead use the testCase.log method, the diagnostics end up in the right place and the mechanism is more flexible: you can log at different levels, so you can turn the output on or off as you please and control whether you want to see it. It also doesn't just go to the command line; it flows much more nicely into the TAP stream, the JUnit XML, and the PDF/HTML test reports, and so on. For your case the test method would look something like the following (a sketch; level 3 corresponds to the Detailed level shown in the output further down):
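function testMethod(self, param, param2)
    % log at level 3 (Detailed) instead of printing directly with fprintf
    self.log(3, sprintf('I''m here, with the params: %f/%f', param, param2));
    pause(0.1);
    self.assertGreaterThan(param, param2);
end
The runner itself stays the same: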
runner = matlab.unittest.TestRunner.withNoPlugins();
runner.addPlugin(matlab.unittest.plugins.TAPPlugin.producingOriginalFormat());
results = runner.run(testSuite);
On the first run you don't see any of the log calls, because they were logged at verbosity 3 and the default is lower (level 1, I believe):
1..8
not ok 1 - SomeOtherTest/testMethod(param=value1,param2=value1)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 2 - SomeOtherTest/testMethod(param=value1,param2=value2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 3 - SomeOtherTest/testMethod(param=value2,param2=value1)
not ok 4 - SomeOtherTest/testMethod(param=value2,param2=value2)
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
not ok 5 - SomeTest/testMethod(param=value1,param2=value1)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 6 - SomeTest/testMethod(param=value1,param2=value2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 7 - SomeTest/testMethod(param=value2,param2=value1)
not ok 8 - SomeTest/testMethod(param=value2,param2=value2)
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
However, if you configure the TAP plugin (or the version 13 TAP plugin, or the report plugin, etc.) to log at level three, then you see these diagnostics, and they appear at the expected location as well:
runner = matlab.unittest.TestRunner.withNoPlugins();
runner.addPlugin(matlab.unittest.plugins.TAPPlugin.producingOriginalFormat('Verbosity', 3));
results = runner.run(testSuite);
Now you see the output at the expected places. Also try TAP version 13; the structured output it provides might give an even better result.
1..8
not ok 1 - SomeOtherTest/testMethod(param=value1,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:18): I'm here, with the params: 1.000000/1.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 2 - SomeOtherTest/testMethod(param=value1,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 1.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 3 - SomeOtherTest/testMethod(param=value2,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 2.000000/1.000000
# ================================================================================
not ok 4 - SomeOtherTest/testMethod(param=value2,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 2.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeOtherTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
not ok 5 - SomeTest/testMethod(param=value1,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 1.000000/1.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value1) and it did not run to completion.
# ================================================================================
not ok 6 - SomeTest/testMethod(param=value1,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:19): I'm here, with the params: 1.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value1,param2=value2) and it did not run to completion.
# ================================================================================
ok 7 - SomeTest/testMethod(param=value2,param2=value1)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:20): I'm here, with the params: 2.000000/1.000000
# ================================================================================
not ok 8 - SomeTest/testMethod(param=value2,param2=value2)
# ================================================================================
# [Detailed] Diagnostic logged (2018-08-09 16:47:20): I'm here, with the params: 2.000000/2.000000
# ================================================================================
# ================================================================================
# Assertion failed in SomeTest/testMethod(param=value2,param2=value2) and it did not run to completion.
# ================================================================================
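If you want to try the version 13 format, only the plugin construction changes; a sketch, assuming producingVersion13 accepts the same Verbosity parameter:
runner.addPlugin(matlab.unittest.plugins.TAPPlugin.producingVersion13('Verbosity', 3));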
Hope that helps!

running unit tests of schematics/scully

After having forked scully, running npm test from the root folder launches the jest tests but doesn't launch the jasmine unit tests under ./schematics/scully.
Running npm test from under ./schematics/scully doesn't work either:
git clone git@github.com:atao60/scully.git
cd scully/schematics/scully
npm i
npm i -D guess-parser@^0.4.13 @angular/core@"^8.0.0 || ^9.0.0-0" @angular/common@"^8.0.0 || ^9.0.0-0"
npm i -D bufferutil@^4.0.1 utf-8-validate@^5.0.2 zone.js@~0.10.3 typescript@"~3.7.5 || ~3.8.0"
npm test
#
# > @scullyio/init@0.0.26 test /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully
# > npm run build && jasmine src/**/*_spec.js
#
#
# > @scullyio/init@0.0.26 build /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully
# > tsc -p tsconfig.json
#
# Randomized with seed 75077
# Started
# FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
#
# Failures:
# 1) add-post when using `metaDataFile` option should add the meta data but keep title from options
# Message:
# Error: Cannot find module '@schematics/angular/package.json'
# Require stack:
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/node_modules/@angular-devkit/schematics/tools/node-module-engine-host.js
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/node_modules/@angular-devkit/schematics/tools/workflow/node-workflow.js
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/node_modules/@angular-devkit/schematics/tools/index.js
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/node_modules/@angular-devkit/schematics/testing/schematic-test-runner.js
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/node_modules/@angular-devkit/schematics/testing/index.js
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/src/add-blog/index_spec.js
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/node_modules/jasmine/lib/jasmine.js
# - /home/pierre/DevSpace/gh-pages-explo/scully/schematics/scully/node_modules/jasmine/bin/jasmine.js
# Stack:
# error properties: Object({ code: 'MODULE_NOT_FOUND', requireStack: [ '/home/pierre/DevSpace/
# [...]
#
# 33 specs, 33 failures
# Finished in 0.465 seconds
# Randomized with seed 75077 (jasmine --random=true --seed=75077)
# npm ERR! Test failed. See above for more details.
It seems I missed something obvious, but I can't figure out what. Any clue would be welcome.

Error executing action `create` on resource 'cookbook_file'

Getting an error while executing action 'create' on a resource. I am running the recipe in --local-mode; not sure if that is the problem.
I am just trying to run it locally first instead of running it on a node.
Pasting my recipe and my output here:
[2019-04-09T19:25:51+00:00] WARN: Node rheaj has an empty run list.
Converging 4 resources
Recipe: @recipe_files::/home/rheaj/chef/cookbooks/oc_jumpbox/recipes/default.rb
* cookbook_file[/etc/motd] action create[2019-04-09T19:25:51+00:00] INFO: Processing cookbook_file[/etc/motd] action create (@recipe_files::/home/rheaj/chef/cookbooks/oc_jumpbox/recipes/default.rb line 9)
================================================================================
Error executing action `create` on resource 'cookbook_file[/etc/motd]'
================================================================================
Chef::Exceptions::CookbookNotFound
----------------------------------
Cookbook @recipe_files not found. If you're loading @recipe_files from another cookbook, make sure you configure the dependency in your metadata
Resource Declaration:
---------------------
# In /home/rheaj/chef/cookbooks/oc_jumpbox/recipes/default.rb
9: cookbook_file '/etc/motd' do
10: group 'root'
11: user 'root'
12: mode '0644'
13: source 'motd'
14: end
15:
The default location for your cookbook_file sources is files/default inside the cookbook. The error means your cookbook file is not there. Create the file files/default/motd and re-run your chef-client.
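For reference, the relevant layout would look something like this (assuming your cookbook is named oc_jumpbox, as in the paths above):
oc_jumpbox/
├── metadata.rb
├── recipes/
│   └── default.rb
└── files/
    └── default/
        └── motd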

Chef MongoDB Replication with sc-mongodb

I am new to Chef and I'm using sc-mongodb, and I can't get this to work. Is there a better way of doing replication for MongoDB with Chef?
I was able to get the default recipe working:
include_recipe "sc-mongodb::default"
But when I tried to do replication for mongo, I started getting some weird errors.
include_recipe "sc-mongodb::replicaset"
Error:
================================================================================
Recipe Compile Error in /tmp/kitchen/cache/cookbooks/c_mongo/recipes/default.rb
================================================================================
Net::HTTPServerException
------------------------
400 "Bad Request"
Cookbook Trace:
---------------
/tmp/kitchen/cache/cookbooks/sc-mongodb/definitions/mongodb.rb:236:in `block in from_file'
/tmp/kitchen/cache/cookbooks/sc-mongodb/recipes/replicaset.rb:36:in `from_file'
/tmp/kitchen/cache/cookbooks/c_mongo/recipes/default.rb:54:in `from_file'
Relevant File Content:
----------------------
/tmp/kitchen/cache/cookbooks/sc-mongodb/definitions/mongodb.rb:
229: notifies :run, 'ruby_block[config_sharding]', :immediately if new_resource.is_mongos && new_resource.auto_configure_sharding
230: # we don't care about a running mongodb service in these cases, all we need is stopping it
231: ignore_failure true if new_resource.name == 'mongodb'
232: end
233:
234: # replicaset
235: if new_resource.is_replicaset && new_resource.auto_configure_replicaset
236>> rs_nodes = search(
237: :node,
238: "mongodb_cluster_name:#{new_resource.cluster_name} AND "\
239: 'mongodb_is_replicaset:true AND '\
240: "mongodb_config_mongod_replication_replSetName:#{new_resource.replicaset_name} AND "\
241: "chef_environment:#{node.chef_environment}"
242: )
243:
244: ruby_block 'config_replicaset' do
245: block do
System Info:
------------
chef_version=13.8.5
platform=centos
platform_version=7.4.1708
ruby=ruby 2.4.3p205 (2017-12-14 revision 61247) [x86_64-linux]
program_name=chef-client worker: ppid=28997;start=00:31:33;
executable=/opt/chef/bin/chef-client
Running handlers:
[2018-03-27T00:31:35+00:00] ERROR: Running exception handlers
Running handlers complete
[2018-03-27T00:31:35+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 01 seconds
[2018-03-27T00:31:35+00:00] FATAL: Stacktrace dumped to /tmp/kitchen/cache/chef-stacktrace.out
[2018-03-27T00:31:35+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2018-03-27T00:31:35+00:00] ERROR: 400 "Bad Request"
[2018-03-27T00:31:35+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
I have tried many things to resolve this problem, going through the issues on the GitHub repository. From the errors, it looks like the attributes aren't getting set, so people set them manually:
# attempt1------------
#node.default['mongodb']['config']['replSet'] = true
#node.default[:mongodb][:cluster_name] = "repl-name"
#include_recipe "sc-mongodb::replicaset"
# attempt2----------
#node.normal['mongodb']['install_method'] = 'mongodb-org'
#node.normal['mongodb']['config']['bind_ip'] = '0.0.0.0'
#node.normal['mongodb']['dbconfig_file'] = '/etc/mongod.conf'
#node.normal['mongodb']['config']['replSet'] = true
#node.normal['mongodb']['is_replicaset'] = true
#node.normal['mongodb']['cluster_name'] = 'scribe'
#node.normal['mongodb']['replSet'] = 'scribe'
#node.normal['mongodb']['is_shard'] = false
#include_recipe "sc-mongodb::replicaset"
#attempt3------------
#node.default[:mongodb][:cluster_name] = "cluster_name"
#include_recipe "sc-mongodb::replicaset"
#attempt4------------
#if node['mongodb']['config']['replSet'].nil?
# node.default['mongodb']['config']['replSet'] = "repl-name"
#end
#include_recipe "sc-mongodb::replicaset"
#attempt5-------------
#https://github.com/sous-chefs/mongodb/issues/167
#node.default['mongodb']['config']['mongod']['replication']['replSetName'] = "rs-name"
#include_recipe "sc-mongodb::replicaset"
This one gives me a different error:
#attempt6-----------
node.default['mongodb']['config']['mongod']['replication']['replSetName']= 'rs_default'
node.default['mongodb']['cluster_name'] = 'cluster'
node.default['mongodb']['auto_configure']['replicaset'] = true
include_recipe "sc-mongodb::replicaset"
Stacktrace:
================================================================================
Error executing action `run` on resource 'ruby_block[config_replicaset]'
================================================================================
NoMethodError
-------------
undefined method `[]' for nil:NilClass
Cookbook Trace:
---------------
/tmp/kitchen/cache/cookbooks/sc-mongodb/libraries/mongodb.rb:74:in `configure_replicaset'
/tmp/kitchen/cache/cookbooks/sc-mongodb/definitions/mongodb.rb:246:in `block (3 levels) in from_file'
Resource Declaration:
---------------------
# In /tmp/kitchen/cache/cookbooks/sc-mongodb/definitions/mongodb.rb
244: ruby_block 'config_replicaset' do
245: block do
246: MongoDB.configure_replicaset(node, replicaset_name, rs_nodes) unless new_resource.replicaset.nil?
247: end
248: action :nothing
249: end
250:
Compiled Resource:
------------------
# Declared in /tmp/kitchen/cache/cookbooks/sc-mongodb/definitions/mongodb.rb:244:in `block in from_file'
ruby_block("config_replicaset") do
params {:mongodb_type=>"mongod", :action=>[:enable, :start], :logpath=>"/var/log/mongodb/mongod.log", :configservers=>[], :replicaset=>true, :notifies=>[], :not_if=>[], :name=>"mongod"}
action [:nothing]
retries 0
retry_delay 2
default_guard_interpreter :default
block_name "config_replicaset"
declared_type :ruby_block
cookbook_name "sc-mongodb"
recipe_name "replicaset"
block #<Proc:0x00000003ebdec8@/tmp/kitchen/cache/cookbooks/sc-mongodb/definitions/mongodb.rb:245>
end
Platform:
---------
x86_64-linux
I've had a lot of trouble with this cookbook; you're not alone.
From what I've gathered, you need to run this cookbook multiple times and/or in different configurations depending on what you are trying to achieve and what state your node is in. For example, I believe the auto_configure attribute should only be set for the last node in the set, after the others have been cheffed with it set to false, as sketched below. Similarly for their user recipe: MongoDB only allows admin collection operations on the primary, so you should ensure that recipe is executed on the designated primary node.
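As a sketch of that ordering, reusing the attribute paths from your attempt 6 (unverified against every cookbook version; treat it as a starting point, not a recipe that is known to converge):
# On every member except the one that should initiate the replica set:
node.default['mongodb']['config']['mongod']['replication']['replSetName'] = 'rs_default'
node.default['mongodb']['cluster_name'] = 'cluster'
node.default['mongodb']['auto_configure']['replicaset'] = false
include_recipe 'sc-mongodb::replicaset'
# Then, on the last node only, run the same recipe with
# node.default['mongodb']['auto_configure']['replicaset'] = true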
Unfortunately the documentation is not clear, and for someone like me who is new to Chef and Ruby, the source and the errors are tricky to interpret. I am still in the process of figuring out this cookbook and can report back when I have something concrete to give you. Have you been able to get this working since your post? Sorry I can't be of more help; you will have to try configurations out with test-kitchen VMs.

PyTest Suppress Results Debug Statement

I am using PyTest with the following options: -s, -v, and --resultlog=results.txt. This suppresses print statements from my test, but prints the test names and results as they are run and logs the results to results.txt.
However, if any tests fail, I also get a spew of information containing traceback, debug, etc. Since I am logging this to a file anyway, I don't want it printed to the screen, cluttering up my output.
Is there any way to disable the printing of just these debug statements, but still have it logged to my results file?
Visual example:
Currently, I see something like this:
$ py.test -sv --resultlog=results.txt test.py
=============================== test session starts =========================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- /...
cachedir: .cache
rootdir: /Users/jdinkel/Documents, inifile:
plugins: profiling-1.1.1, session2file-0.1.9
collected 3 items
test.py::TestClass::test1 PASSED
test.py::TestClass::test2 PASSED
test.py::TestClass::test3 FAILED
===================================== FAILURES ==============================
__________________________________ TestClass.test3 __________________________
self = <test.TestClass instance at 0x10beb5320>
def test3(self):
> assert 0
E assert 0
test.py:7: AssertionError
========================== 1 failed, 2 passed in 0.01 seconds ===============
But I would like to see this:
$ py.test -sv --resultlog=results.txt test.py
=============================== test session starts =========================
platform darwin -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1 -- /...
cachedir: .cache
rootdir: /Users/jdinkel/Documents, inifile:
plugins: profiling-1.1.1, session2file-0.1.9
collected 3 items
test.py::TestClass::test1 PASSED
test.py::TestClass::test2 PASSED
test.py::TestClass::test3 FAILED
========================== 1 failed, 2 passed in 0.01 seconds ===============
With no change to the results.txt file.
You should use the --tb switch to control traceback output.
e.g.
pytest tests/ -sv --tb=no --disable-warnings
--disable-warnings disables the occasional pytest warnings, which I assume you don't want either.
From pytest help:
--tb=style traceback print mode (auto/long/short/line/native/no).
In addition to @SilentGuy's answer, -r N suppresses the short summary of failed test cases.
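If you want this behavior by default, the same flags can live in the project's ini file; a sketch combining the suggestions above (whether --tb=no leaves results.txt completely untouched is worth verifying on your pytest version):
[pytest]
addopts = -sv --tb=no --disable-warnings -rN --resultlog=results.txt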

0xC0000022 before RtlUserThreadStart

I'm injecting some code to hook APIs in processes, but I have issues in some applications like chrome.exe.
My test app launches a suspended process, does the injection and API hooking, and then resumes it.
CreateProcessW is hooked in order to be able to hook child processes: if CreateProcessW is called, the child is forced to be created suspended, hooked, and then resumed.
The injected code depends only on ntdll APIs, so although hooked processes are not fully initialized yet, ntdll.dll is always present.
The code is injected using a helper thread created via CreateRemoteThread or NtCreateThreadEx with the CREATE_SUSPENDED flag (no matter which one is used, the issue is still there), as sketched below.
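For context, the creation step looks roughly like this (a sketch; pRemoteEntry and pRemoteParam are hypothetical names for the payload entry point and parameter block already written into the target):
#include <windows.h>

// Create the helper thread suspended in the target process; it is resumed
// only after the payload and hooks are in place.
static HANDLE CreateInjectorThread(HANDLE hProcess, LPVOID pRemoteEntry, LPVOID pRemoteParam)
{
    return CreateRemoteThread(
        hProcess,                              // target, opened with PROCESS_CREATE_THREAD etc.
        NULL,                                  // default security attributes
        0,                                     // default stack size
        (LPTHREAD_START_ROUTINE)pRemoteEntry,  // entry point previously mapped into the target
        pRemoteParam,                          // parameter block mapped into the target
        CREATE_SUSPENDED,                      // do not run yet
        NULL);                                 // thread id not needed
}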
After this intro, the problem: in some processes, like some chrome.exe children, CreateRemoteThread succeeds, but when I resume the injector thread, it exits with code 0xC0000022 (STATUS_ACCESS_DENIED) and the process exits too.
If I attach WinDbg to a suspended chrome.exe child process before doing anything, it fails as well, and chrome.exe ends with the same behavior.
It seems that OS code executed before RtlUserThreadStart generates the error, but I don't know how to debug it.
How can I debug code that runs before RtlUserThreadStart? Is there a debugger or a WinDbg option that allows me to do that?
EDIT:
Following the last post from here, I could retrieve this info:
0a88:0814 @ 02688302 - LdrpInitializeProcess - INFO: Beginning execution of chrome.exe (c:\Program Files (x86)\Google\Chrome\Application\chrome.exe)
Current directory: C:\Windows
Search path: C:\Windows\SYSTEM32
0a88:0814 @ 02688318 - LdrpInitializeProcess - ERROR: Initializing the current directory to "C:\Windows" failed with status 0xc0000022
0a88:0814 @ 02688334 - LdrLoadDll - ENTER: DLL name: C:\Windows\SYSTEM32\wow64.dll DLL path: NULL
0a88:0814 @ 02688349 - LdrpLoadDll - ENTER: DLL name: C:\Windows\SYSTEM32\wow64.dll DLL path: C:\Windows\SYSTEM32
0a88:0814 @ 02688365 - LdrpLoadDll - INFO: Loading DLL C:\Windows\SYSTEM32\wow64.dll from path C:\Windows\SYSTEM32
0a88:0814 @ 02688380 - LdrpFindOrMapDll - ENTER: DLL name: C:\Windows\SYSTEM32\wow64.dll DLL path: C:\Windows\SYSTEM32
0a88:0814 @ 02688396 - LdrpSearchPath - ENTER: DLL name: C:\Windows\SYSTEM32\wow64.dll DLL path: C:\Windows\SYSTEM32
0a88:0814 @ 02688412 - LdrpResolveFileName - ENTER: DLL name: C:\Windows\SYSTEM32\wow64.dll
0a88:0814 @ 02688427 - LdrpResolveFileName - RETURN: Status: 0xc0000022
0a88:0814 @ 02688443 - LdrpSearchPath - RETURN: Status: 0xc0000022
0a88:0814 @ 02688458 - LdrpFindOrMapDll - RETURN: Status: 0xc0000022
0a88:0814 @ 02688474 - LdrpLoadDll - RETURN: Status: 0xc0000022
0a88:0814 @ 02688490 - LdrLoadDll - RETURN: Status: 0xc0000022
0a88:0814 @ 02688505 - LdrpInitializeProcess - ERROR: Loading WOW64 image management DLL "C:\Windows\SYSTEM32\wow64.dll" failed with status 0xc0000022
0a88:0814 @ 02688521 - _LdrpInitialize - ERROR: Process initialization failed with status 0xc0000022
0a88:0814 @ 02688536 - LdrpInitializationFailure - ERROR: Process initialization failed with status 0xc0000022
The process is created with a restricted token; the main thread inherits it, but my injector thread isn't restricted, because it is created by my app.
I can assume ntdll's APIs are not yet hooked by chrome (in this case), because the injection takes place before CreateProcess returns to chrome.
Might the non-restricted token in my thread conflict with the process token in some way?
Take a look at Debugging WinLogon in the WinDbg help (debugger.chm); simply substitute "chrome.exe" for "winlogon.exe". This technique controls a user-mode debugger (ntsd) from the kernel-mode debugger. I believe it will let you debug chrome.exe's process initialization much earlier than a user-mode debugger alone.
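The core of that technique is an Image File Execution Options entry that launches ntsd for the image and redirects its input/output to the kernel debugger via -d; roughly (this is the standard IFEO mechanism, check debugger.chm for the exact steps):
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\chrome.exe" /v Debugger /t REG_SZ /d "ntsd -d"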
The issue in chrome was the following:
Chrome launches child processes with very limited privileges (because of the sandbox), but before resuming the main thread it makes that thread impersonate a token with more privileges, so that the process can initialize.
My injector thread was not impersonating, so the limited process token produced the 0xC0000022 exit code when the LdrpInitializeProcess routine ran on it.
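A sketch of one way to mirror that impersonation onto the injected thread before resuming it (hypothetical handle names; depending on the token's impersonation level you may additionally need DuplicateToken):
#include <windows.h>

// Copy the impersonation token from the target's (still suspended) main
// thread onto the injector thread, so loader code running on it gets the
// privileges chrome intended the process to initialize with.
static void MirrorImpersonation(HANDLE hMainThread, HANDLE hInjectorThread)
{
    HANDLE hToken = NULL;
    if (OpenThreadToken(hMainThread,
                        TOKEN_QUERY | TOKEN_DUPLICATE | TOKEN_IMPERSONATE,
                        TRUE,          // OpenAsSelf: check access with our own token
                        &hToken))
    {
        SetThreadToken(&hInjectorThread, hToken);
        CloseHandle(hToken);
    }
    ResumeThread(hInjectorThread);
}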