I have a serious error with PouchDB communicating with my Cloudant database in an Angular/Ionic app.
Can you please help me figure out how to fix this?
POST https://louisromain.cloudant.com/boardline_users/_bulk_get?revs=true&attachments=true&_nonce=1446478625328 400 (Bad Request)
pouchdb.min.js:8 Database has a global failure DOMError {message: "", name: "QuotaExceededError"}
ionic.bundle.min.js:139 o {status: 500, name: "abort", message: "unknown", error: true, reason: "QuotaExceededError", result: {doc_write_failures: 1, docs_read: 1, docs_written: 0, errors: Array[1], ok: false, status: "aborting"}}
11ionic.bundle.min.js:139 Error: Failed to execute 'transaction' on 'IDBDatabase': The database connection is closing.
at Error (native)
at a.9.n.openTransactionSafely (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:8:9233)
at i.a.8.e._getLocal (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:8:2521)
at i.<anonymous> (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:7:6737)
at i.<anonymous> (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:10:28092)
at i.a.90.t.exports (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:10:28931)
at http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:9:28802
at i.<anonymous> (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:9:28722)
at i.a.90.t.exports [as get] (http://localhost:8101/lib/pouchdb/dist/pouchdb.min.js:10:28931)
at i.angular.module.constant.service.$q.qify [as get] (http://localhost:8101/lib/angular-pouchdb/angular-pouchdb.js:35:27)(anonymous function) # ionic.bundle.min.js:139b.$get # ionic.bundle.min.js:111(anonymous function) # ionic.bundle.min.js:151a.$get.n.$eval # ionic.bundle.min.js:165a.$get.n.$digest # ionic.bundle.min.js:163(anonymous function) # ionic.bundle.min.js:166e # ionic.bundle.min.js:74(anonymous function) # ionic.bundle.min.js:76
The error is that the device has run out of space. Unfortunately this is an error thrown by IndexedDB itself when the device is too low on storage, so there's nothing you can do about it except to use less space. PouchDB's compact() can help; there's also the transform-pouch plugin if you want to just reduce the size of your documents.
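For what it's worth, here is a minimal sketch of using compaction to reclaim space (the database name is made up, not taken from the question):

// Sketch: compact automatically after every write...
var db = new PouchDB('boardline_local', {auto_compaction: true});

// ...or compact an existing database on demand to drop old revisions:
db.compact().then(function () {
    console.log('compaction finished');
}).catch(function (err) {
    console.log('compaction failed', err);
});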
I built a classification model on my local machine and am now using Azure Machine Learning for deployment.
I have registered my model on Azure ML.
While deploying and trying to expose the web service, I am running into issues with the Docker image creation.
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.image import ContainerImage
from azureml.core.webservice import Webservice

wenv = CondaDependencies()
wenv.add_conda_package("scikit-learn")

with open("wenv.yml", "w") as f:
    f.write(wenv.serialize_to_string())
with open("wenv.yml", "r") as f:
    print(f.read())

image_config = ContainerImage.image_configuration(execution_script="scorete.py",
                                                  runtime="python",
                                                  conda_file="wenv.yml")

# Expose web service
service_name = 'telecoinference'
service = Webservice.deploy_from_model(workspace=ws,
                                       name=service_name,
                                       deployment_config=aciconfig,
                                       models=[model],
                                       image_config=image_config)
service.wait_for_deployment(show_output=True)
print(service.state)
WebserviceException Traceback (most recent call last)
<ipython-input-50-cbddf70eccff> in <module>
      7 deployment_config=aciconfig,
      8 models=[model],
----> 9 image_config=image_config)
     10 service.wait_for_deployment(show_output=True)
     11 print(service.state)
~\AppData\Roaming\Python\Python36\site-packages\azureml\core\webservice\webservice.py in deploy_from_model(workspace, name, models, image_config, deployment_config, deployment_target, overwrite)
    450
    451 image = Image.create(workspace, name, models, image_config)
--> 452 image.wait_for_creation(True)
    453 if image.creation_state != 'Succeeded':
    454 raise WebserviceException('Error occurred creating image {} for service. More information can be found '
~\AppData\Roaming\Python\Python36\site-packages\azureml\core\image\image.py in wait_for_creation(self, show_output)
    452 'current state: {}\n'
    453 'Error response from server:\n'
--> 454 '{}'.format(self.creation_state, error_response), logger=module_logger)
    455
    456 print('Image creation operation finished for image {}, operation "{}"'.format(self.id, operation_state))
WebserviceException: WebserviceException:
Message: Image creation polling reached non-successful terminal state, current state: Failed
Error response from server:
StatusCode: 400
Message: Docker image build failed.
InnerException None
ErrorResponse
{
  "error": {
    "message": "Image creation polling reached non-successful terminal state, current state: Failed\nError response from server:\nStatusCode: 400\nMessage: Docker image build failed."
  }
}
I am writing a Protractor test for the login of an AngularJS app and want to verify that the login succeeds and that the URL changes after login. I tried using an expected condition with urlContains(), and I also tried browser.getCurrentUrl() with toContain(), but I am getting errors with both.
exports.config = {
    seleniumAddress: 'http://localhost:4444/wd/hub',
    specs: ['login.spec.js'],
};
The expected condition passes the test when the URL is correct, but when the URL is different it throws a timeout error:
"Failed: Wait timed out after 5013ms".
expect(browser.getCurrentUrl()).toContain('/dashboard') always fails with the error below:
Stack:
ScriptTimeoutError: script timeout: result was not received in 11 seconds
(Session info: chrome=75.0.3770.142)
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:25:53'
System info: os.name: 'Mac OS X', os.arch: 'x86_64', os.version: '10.14.3', java.version: '12.0.1'
Driver info: driver.version: unknown
at Object.checkLegacyResponse (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/error.js:546:15)
at parseHttpResponse (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/http.js:509:13)
at doSend.then.response (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/http.js:441:30)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
From: Task: Protractor.waitForAngular()
at thenableWebDriverProxy.schedule (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/webdriver.js:807:17)
at ProtractorBrowser.executeAsyncScript_ (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/built/browser.js:425:28)
at angularAppRoot.then (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/built/browser.js:456:33)
at ManagedPromise.invokeCallback_ (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:1376:14)
at TaskQueue.execute_ (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:3084:14)
at TaskQueue.executeNext_ (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:3067:27)
at asyncRun (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:2927:27)
at /Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/selenium-webdriver/lib/promise.js:668:7
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
From: Task: Run it("should login successfully") in control flow
at UserContext.<anonymous> (/Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/jasminewd2/index.js:94:19)
From asynchronous test:
Error
at Object.<anonymous> (/Users/ProtractorTest/Tests/login.spec.js:17:3)
at Module._compile (module.js:643:30)
at Object.Module._extensions..js (module.js:654:10)
at Module.load (module.js:556:32)
at tryModuleLoad (module.js:499:12)
at Function.Module._load (module.js:491:3)
at Module.require (module.js:587:17)
at require (internal/module.js:11:18)
at /Users/.nvm/versions/node/v8.9.4/lib/node_modules/protractor/node_modules/jasmine/lib/jasmine.js:93:5
Below is my code
it('should login successfully', function () {
    browser.get("https://example.com/");
    loginobj.username.sendKeys(logindata.email);
    loginobj.password.sendKeys(logindata.password);
    loginobj.loginbtn.click().then(function(){
        browser.getCurrentUrl().then(url => expect(url).toContain('/dashboard'));
        //var EC = protractor.ExpectedConditions;
        //browser.wait(EC.urlContains('/dashboard'), 5000);
    });
});
I expect that when the URL is different from the expected one, it should display a meaningful error message instead of a timeout error.
By default, Protractor handles all the asynchrony for you. Looking at your code, you are relying on the default Protractor behaviour, i.e. not setting SELENIUM_PROMISE_MANAGER to false.
In that case, why do you want to do anything inside click().then()? It can be as simple and plain as:
loginobj.loginbtn.click();
expect(browser.getCurrentUrl()).toContain('/dashboard');
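For illustration, here is a minimal sketch of how those two lines sit in the whole spec (loginobj, logindata and the URLs are taken from the question):

it('should login successfully', function () {
    browser.get("https://example.com/");
    loginobj.username.sendKeys(logindata.email);
    loginobj.password.sendKeys(logindata.password);
    loginobj.loginbtn.click();
    // With the promise manager enabled, expect() unwraps the promise before asserting
    expect(browser.getCurrentUrl()).toContain('/dashboard');
});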
One theory about your code: once you put something inside click().then(), it is out of place in the promise queue that Protractor manages for you. Unless it is absolutely necessary, for example to get a value from an element for later use in the spec, I would suggest not meddling with Protractor's asynchronous handling as much as possible.
Hope that helps.
I have something similar
const currentUrl = await browser.getCurrentUrl().then(url => url);
expect(currentUrl).toContain('/dashboard')
Try it out, maybe it will help; just drop the await, since I see you don't use async functions.
Or like this:
await browser.getCurrentUrl().then(url => expect(url).toContain('/dashboard'));
In Protractor, the default script timeout is 11 seconds.
In the code snippet above, the statement
browser.getCurrentUrl().then(url => expect(url).toContain('/dashboard'));
takes more than 11 seconds to resolve its promise.
Solution: in the Protractor configuration file, add
allScriptsTimeout: timeout_in_millis
e.g. for a 30-second timeout:
allScriptsTimeout: 30000
Edited Configuration File:
exports.config = {
    allScriptsTimeout: 30000,
    seleniumAddress: 'http://localhost:4444/wd/hub',
    specs: ['login.spec.js'],
};
I already have a users schema with an authentication key and wanted to authenticate against that. I tried implementing authentication via SQL, but because my schema has a different structure I was getting errors, so I implemented the external-authentication method instead. The technologies and OS used in my application are:
Node.JS
Ejabberd as XMPP server
MySQL Database
React-Native (Front-End)
OS - Ubuntu 18.04
I implemented the external authentication configuration as described in https://docs.ejabberd.im/admin/configuration/#external-script and took the PHP script https://www.ejabberd.im/files/efiles/check_mysql.php.txt as an example, but I am getting the error shown below in error.log. In ejabberd.yml I have the following configuration:
...
host_config:
  "example.org.co":
    auth_method: [external]
    extauth_program: "/usr/local/etc/ejabberd/JabberAuth.class.php"
    auth_use_cache: false
...
Also, is there an external-auth script available in JavaScript?
Here are the relevant entries from error.log and ejabberd.log:
error.log
2019-03-19 07:19:16.814 [error]
<0.524.0>#ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin#example.org.co:
disconnected
ejabberd.log
2019-03-19 07:19:16.811 [debug] <0.524.0>#ejabberd_http:init:151 S:
[{[<<"api">>],mod_http_api},{[<<"admin">>],ejabberd_web_admin}]
2019-03-19 07:19:16.811 [debug]
<0.524.0>#ejabberd_http:process_header:307 (#Port<0.13811>) http
query: 'POST' <<"/api/register">>
2019-03-19 07:19:16.811 [debug]
<0.524.0>#ejabberd_http:process:394 [<<"api">>,<<"register">>] matches
[<<"api">>]
2019-03-19 07:19:16.811 [info]
<0.364.0>#ejabberd_listener:accept:238 (<0.524.0>) Accepted connection
::ffff:ip -> ::ffff:ip
2019-03-19 07:19:16.814 [info]
<0.524.0>#mod_http_api:log:548 API call register
[{<<"user">>,<<"test">>},{<<"host">>,<<"example.org.co">>},{<<"password">>,<<"test">>}]
from ::ffff:ip
2019-03-19 07:19:16.814 [error]
<0.524.0>#ejabberd_auth_external:failure:103 External authentication
program failed when calling 'check_password' for admin#example.org.co:
disconnected
2019-03-19 07:19:16.814 [debug]
<0.524.0>#mod_http_api:extract_auth:171 Invalid auth data:
{error,invalid_auth}
Any help regarding this topic will be appreciated.
1) Your config for the auth_method looks good.
2) Here is a Python script I've used and extended to do external authentication for ejabberd.
#!/usr/bin/python
import sys
from struct import *
import os

def openAuth(args):
    (user, server, password) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def openIsuser(args):
    (user, server) = args
    # Implement your interactions with your service / database
    # Return True or False
    return True

def loop():
    switcher = {
        "auth": openAuth,
        "isuser": openIsuser,
        "setpass": lambda args: True,
        "tryregister": lambda args: False,
        "removeuser": lambda args: False,
        "removeuser3": lambda args: False,
    }
    # Keep serving requests until ejabberd closes the pipe
    while True:
        data = from_ejabberd()
        to_ejabberd(switcher.get(data[0], lambda args: False)(data[1:]))

def from_ejabberd():
    # Each request is a 2-byte big-endian length followed by "op:user:server[:password]"
    input_length = sys.stdin.read(2)
    (size,) = unpack('>h', input_length)
    return sys.stdin.read(size).split(':')

def to_ejabberd(result):
    # Reply with a 2-byte length (always 2) and a 2-byte result: 1 = success, 0 = failure
    if result:
        sys.stdout.write('\x00\x02\x00\x01')
    else:
        sys.stdout.write('\x00\x02\x00\x00')
    sys.stdout.flush()

if __name__ == "__main__":
    try:
        loop()
    except error:
        pass
I didn't write the from_ejabberd() and to_ejabberd() communication functions myself, and unfortunately I can't find the original source any more.
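Since the question also asks for a JavaScript version: below is an untested Node.js sketch of the same extauth protocol (a 2-byte big-endian length prefix, then "op:user:server[:password]", answered with a 2-byte length and a 2-byte 0/1 result). checkPassword and userExists are placeholders for your own MySQL lookups.

#!/usr/bin/env node
// Hypothetical sketch of ejabberd's extauth protocol in Node.js.
var buf = Buffer.alloc(0);
process.stdin.on('data', function (chunk) {
    buf = Buffer.concat([buf, chunk]);
    while (buf.length >= 2) {
        var length = buf.readUInt16BE(0);
        if (buf.length < 2 + length) break;              // wait for the full request
        var parts = buf.slice(2, 2 + length).toString('utf8').split(':');
        buf = buf.slice(2 + length);
        var ok = false;
        if (parts[0] === 'auth')   ok = checkPassword(parts[1], parts[2], parts[3]);
        if (parts[0] === 'isuser') ok = userExists(parts[1], parts[2]);
        var reply = Buffer.alloc(4);
        reply.writeUInt16BE(2, 0);                       // response length is always 2
        reply.writeUInt16BE(ok ? 1 : 0, 2);              // 1 = success, 0 = failure
        process.stdout.write(reply);
    }
});

function checkPassword(user, server, password) { return true; }   // placeholder: query your MySQL schema here
function userExists(user, server) { return true; }                // placeholder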
A couple of places propose this solution:
window.addEventListener('unhandledrejection', function(err) {
    window.__karma__.error(err); // yeah private API ¯\_(ツ)_/¯
});
But it throws:
Uncaught TypeError: Cannot read property 'error' of undefined
I'm able to get reports of unhandled rejections with the following setup:
karma.conf.js:
module.exports = function(config) {
    config.set({
        basePath: '',
        frameworks: ['mocha'],
        files: [
            'setup.js',
            'test.js',
        ],
        exclude: [],
        preprocessors: {},
        reporters: ['progress'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['Chrome'],
        singleRun: false,
        concurrency: Infinity
    });
};
setup.js:
window.addEventListener('unhandledrejection', function(ev) {
    window.__karma__.error("unhandled rejection: " + ev.reason.message);
});
test.js:
it("test 1", () => {
Promise.reject(new Error("Q"));
});
it("test 2", (done) => {
setTimeout(done, 1000);
});
Separating setup.js from test.js is not necessary. I just like to have such setup code separate from the tests proper.
When I run karma start --single-run I get:
25 01 2017 07:20:07.521:INFO [karma]: Karma v1.4.0 server started at http://0.0.0.0:9876/
25 01 2017 07:20:07.523:INFO [launcher]: Launching browser Chrome with unlimited concurrency
25 01 2017 07:20:07.528:INFO [launcher]: Starting browser Chrome
25 01 2017 07:20:08.071:INFO [Chrome 55.0.2883 (Linux 0.0.0)]: Connected on socket g-BGwMfQLsQM128IAAAA with id 22107710
Chrome 55.0.2883 (Linux 0.0.0) ERROR
unhandled rejection: Q
Chrome 55.0.2883 (Linux 0.0.0): Executed 1 of 2 ERROR (0.006 secs / 0.001 secs)
Caveat
Reports of unhandled rejections are asynchronous. This has a few consequences.
The example I gave has a 2nd test that takes 1 second to complete. This gives the browser time to report the unhandled rejection from the 1st test. Without this delay, Karma terminates without detecting the unhandled rejection.
Another issue is that an unhandled rejection caused by test X may be discovered while test X+1 is running. The runner's report may then make it look like X+1 is the test that caused the issue.
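One possible mitigation (just a sketch, not guaranteed to catch every case): give the browser a tick at the end of each test so that a rejection from test X surfaces before test X+1 starts, for example in setup.js:

// Sketch: flush pending unhandledrejection events before the next test runs.
// A zero delay is usually enough in Chrome; increase it if rejections still slip through.
afterEach(function (done) {
    setTimeout(done, 0);
});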
I am running MongoDB 2.2.1 on EC2. I have enabled profiling, and I'm sending a slow-op summary to Graphite every 180 seconds. Every now and again the script reports an error (BSONElement: bad type 113), and if I log into the Mongo shell and run db.system.profile.find() I get a more detailed report:
Mon Feb 18 09:12:48 Assertion: 10320:BSONElement: bad type 113
0x6073f1 0x5d1aa9 0x4b0d98 0x5c17a6 0x6b3f35 0x6b6a2c 0x69be0a 0x6aa13f 0x668e46 0x668ec2 0x66a2ce 0x5cbcc4 0x4a4a14 0x4a67e6 0x7f1519bb776d 0x49f669
mongo(_ZN5mongo15printStackTraceERSo+0x21) [0x6073f1]
mongo(_ZN5mongo11msgassertedEiPKc+0x99) [0x5d1aa9]
mongo(_ZNK5mongo11BSONElement4sizeEv+0x1d8) [0x4b0d98]
mongo(_ZN5mongo16resolveBSONFieldEP9JSContextP8JSObjectljPS3_+0x146) [0x5c17a6]
mongo(js_LookupPropertyWithFlags+0x3f5) [0x6b3f35]
mongo(js_GetProperty+0x7c) [0x6b6a2c]
mongo(js_Interpret+0x10ea) [0x69be0a]
mongo(js_Execute+0x36f) [0x6aa13f]
mongo(JS_EvaluateUCScriptForPrincipals+0x66) [0x668e46]
mongo(JS_EvaluateUCScript+0x22) [0x668ec2]
mongo(JS_EvaluateScript+0x6e) [0x66a2ce]
mongo(_ZN5mongo7SMScope4execERKNS_10StringDataERKSsbbbi+0x144) [0x5cbcc4]
mongo(_Z5_mainiPPc+0x26c4) [0x4a4a14]
mongo(main+0x26) [0x4a67e6]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed) [0x7f1519bb776d]
mongo(__gxx_personality_v0+0x2a1) [0x49f669]
Error: BSONElement: bad type 113
In the logs I can see when the script has run and reported the error:
Mon Feb 18 09:26:21 [conn577444] Assertion: 10320:BSONElement: bad type 113
0xaf8c41 0xabedb9 0x570aab 0x7fc84c 0x7fe2ca 0x8057a7 0x806268 0x651171 0x82c71e 0x82c7d4 0x8318f6 0x8345f3 0x7b0b0d 0x7b20e2 0x56fe42 0xae6ed1 0x7f0eb2526e9a 0x7f0eb183c4bd
/opt/mongodb/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0xaf8c41]
/opt/mongodb/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x99) [0xabedb9]
/opt/mongodb/bin/mongod(_ZNK5mongo11BSONElement4sizeEv+0x1cb) [0x570aab]
/opt/mongodb/bin/mongod(_ZNK5mongo7Matcher13matchesDottedEPKcRKNS_11BSONElementERKNS_7BSONObjEiRKNS_14ElementMatcherEbPNS_12MatchDetailsE+0x153c) [0x7fc84c]
/opt/mongodb/bin/mongod(_ZNK5mongo7Matcher7matchesERKNS_7BSONObjEPNS_12MatchDetailsE+0xfa) [0x7fe2ca]
/opt/mongodb/bin/mongod(_ZNK5mongo19CoveredIndexMatcher7matchesERKNS_7BSONObjERKNS_7DiskLocEPNS_12MatchDetailsEb+0xc7) [0x8057a7]
/opt/mongodb/bin/mongod(_ZNK5mongo19CoveredIndexMatcher14matchesCurrentEPNS_6CursorEPNS_12MatchDetailsE+0xa8) [0x806268]
/opt/mongodb/bin/mongod(_ZN5mongo6Cursor14currentMatchesEPNS_12MatchDetailsE+0x41) [0x651171]
/opt/mongodb/bin/mongod(_ZN5mongo20QueryResponseBuilder14currentMatchesERNS_12MatchDetailsE+0x1e) [0x82c71e]
/opt/mongodb/bin/mongod(_ZN5mongo20QueryResponseBuilder8addMatchEv+0x44) [0x82c7d4]
/opt/mongodb/bin/mongod(_ZN5mongo23queryWithQueryOptimizerEiRKSsRKNS_7BSONObjERNS_5CurOpES4_S4_RKN5boost10shared_ptrINS_11ParsedQueryEEES4_RKNS_17ShardChunkVersionERNS7_10scoped_ptrINS_25PageFaultRetryableSectionEEERNSG_INS_19NoPageFaultsAllowedEEERNS_7MessageE+0x376) [0x8318f6]
/opt/mongodb/bin/mongod(_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x1a93) [0x8345f3]
/opt/mongodb/bin/mongod() [0x7b0b0d]
/opt/mongodb/bin/mongod(_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x3a2) [0x7b20e2]
/opt/mongodb/bin/mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x82) [0x56fe42]
/opt/mongodb/bin/mongod(_ZN5mongo3pms9threadRunEPNS_13MessagingPortE+0x411) [0xae6ed1]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a) [0x7f0eb2526e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f0eb183c4bd]
Mon Feb 18 09:26:21 [conn577444] assertion 10320 BSONElement: bad type 113 ns:mydb.system.profile query:{ ts: { $gte: new Date(1361179280953), $lte: new Date(1361179580953) } }
Mon Feb 18 09:26:21 [conn577444] problem detected during query over mydb.system.profile : { $err: "BSONElement: bad type 113", code: 10320 }
The script queries the profile collection for slow operations since the last time it ran (ts: { $gte: new Date(1361179280953), $lte: new Date(1361179580953) }).
I am fairly new to MongoDB; any help is appreciated.
Thanks,
Simone
This generally means you have data corruption, possibly caused by an unclean shutdown. If you do not have too much data, you could run a repair on the database - or, preferably, if you have a backup somewhere, restore your data from the backup.
(It is always recommended that you run with replication, partly so that if you experience corruption you still have a copy of your data.)
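For reference, a minimal sketch of the repair route from the mongo shell (take whatever backup you can first; on 2.2 you can also stop mongod and run it once with --repair):

// From the mongo shell, connected to the affected instance:
use mydb
db.repairDatabase()            // rebuilds the database files; needs free disk space roughly equal to the data size

// If the corruption turns out to be confined to the capped profile collection,
// it can simply be dropped and recreated:
db.setProfilingLevel(0)
db.system.profile.drop()
db.setProfilingLevel(1, 100)   // re-enable profiling for ops slower than 100 ms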