Document not archiving in OpenText Archive Server 10.5 - ecm

Documents are not getting archived from the external directory to the pipeline.
After the document is placed in the external directory, the LOG file disappears, but the remaining three files are left behind.
Looking at the Tomcat Archive Server logs, I see the following error and warning:
ERROR [http-bio-8080-exec-5] com.opentext.ecm.asm.objects.ReqCounter -- 128 . inc ReqCounter.java:55 Security Alert: At least 5 requests with wrong docid occurred from the same client
WARN [http-bio-8080-exec-7] com.opentext.ecm.asm.rcs.service.impl.DocumentServiceImplHelper -- Cannot access document with id.
The following warnings appear in the Prepdoc file:
WRN 0 18:32:07.671 ............... Dsh::getHostAndAidObject dsh.cxx-19100 archive 'ZO' is not accessable (access mode = 'c') on server of type 'orc' (o=original,r=replication,c=cache)
WRN 0 18:32:07.671 .............. Dsh::dsReserveDocId
dsh.cxx-14006 cannot get host for archive 'ZO' and access mode 'c';
the call of function getHostAndAidObject() failed

Go to the archive location and check for the doc ID. Use dinfo from a command prompt to see whether you still get the error.

Related

Setting Dropbox as custom backup on WHM/cPanel error

I am trying to set Dropbox as a custom backup destination following the cPanel blog below. The connection is working, but the backup files are not being transferred to Dropbox, and when I press Validate on the custom backup destination it gives the following error.
https://blog.cpanel.com/cpanel-whm-custom-backup-transport-example-dropbox/
Error: Validation for transport “dropbox” failed: Could not list files in
destination: Executed /usr/local/bin/backup_transport_dropbox.pl ls /
remotehost remoteuser : 2018-08-26T15:54:21 [WebService::Dropbox] [ERROR]
https://api.dropboxapi.com/2/files/list_folder {"path":"/"} -> [400] Error in
call to API function "files/list_folder": request body: path: Specify the root
folder as an empty string rather than as "/". at
/usr/local/share/perl5/WebService/Dropbox.pm line 184.
I am new to the Dropbox API and know no Perl, so I could not figure out what is discussed in the link below.
https://github.com/silexlabs/unifile/issues/77
The error message correctly indicates that the Dropbox API expects an empty string ("") when referring to the root folder; the code is instead sending the value "/". This looks like a bug in the code.
It looks like you've already opened an issue with the developer for this:
https://github.com/CpanelInc/backup-transport-dropbox/issues/3
They should update the code to use "" when referring to the root folder on Dropbox.
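For reference, the sketch below shows the equivalent files/list_folder call in Python with the requests library (the transport script itself is Perl); DROPBOX_TOKEN is a placeholder for a real access token. It illustrates that the API accepts an empty string for the root folder and rejects "/":

# Minimal sketch of the Dropbox v2 files/list_folder call, assuming a valid
# OAuth2 access token in DROPBOX_TOKEN (placeholder). Python/requests is used
# only for illustration; the cPanel transport script is Perl.
import requests

DROPBOX_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def list_folder(path):
    resp = requests.post(
        "https://api.dropboxapi.com/2/files/list_folder",
        headers={"Authorization": "Bearer " + DROPBOX_TOKEN},
        json={"path": path},
    )
    return resp.status_code, resp.text

print(list_folder(""))    # root folder: the API expects an empty string here
print(list_folder("/"))   # 400: "Specify the root folder as an empty string rather than as '/'"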

How to configure an Openfire server with HttpUploadComponent for offline file transfer?

I use Openfire with Conversations and would like to implement offline file transfer with HttpUploadComponent. I have copied the httpupload folder into the Openfire folder, as shown in the screenshot below:
Then I made the following configuration changes in Openfire:
I also installed Python and configured the config.yml file in the httpupload folder as follows:
component_jid: upload.192.168.105.164
component_secret: 1234
component_port: 5275
storage_path : ./var/lib/httpupload/
max_file_size: 20971520 #20MiB
http_address: 0.0.0.0 #use 0.0.0.0 if you don't want to use a proxy
http_port: 8080
get_url : http://192.168.105.164:8080/
put_url : http://192.168.105.164:8080/
expire_interval: 82800 #time in secs between expiry runs (82800 secs = 23 hours). set to '0' to disable
expire_maxage: 2592000 #files older than this (in secs) get deleted by expiry runs (2592000 = 30 days)
user_quota_hard: 104857600 #100MiB. set to '0' to disable rejection on uploads over hard quota
user_quota_soft: 78643200 #75MiB. set to '0' to disable deletion of old uploads over soft quota an expiry runs
allow_web_clients: true #answer OPTIONS requests to allow web clients to upload files
I ran the HttpUpload server as well:
After starting the Python server, go to Openfire > Server Settings > External Components > View the external components (first line) to check whether the session has been created:
After all of this, when I try to send a file from the Android client it fails and gives me this error:
Where is my problem? Thanks.
In the attached error screenshot, the last word is 403, which indicates that the failure is related to authorization on the HttpUploadComponent end.
I checked the code of this component: on line 83 of https://github.com/siacs/HttpUploadComponent/blob/master/httpupload/server.py it picks up the "storage_path" variable from the configuration to decide in which directory to place the uploaded file.
As mentioned in your question, you have set storage_path : ./var/lib/httpupload/
But you are on a Windows machine, and this path is not valid there.
Try giving a valid Windows OS path.
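As a quick sanity check, a small Python snippet like the one below can confirm that whatever storage_path you put in config.yml exists and is writable; the fallback value C:/httpupload/storage is only an example location, not something the component requires:

# Hypothetical sanity check for the storage_path value in config.yml (Python 3,
# requires PyYAML); the fallback path is just an example Windows location.
import os
import yaml

with open("config.yml") as f:
    config = yaml.safe_load(f)

storage_path = config.get("storage_path", "C:/httpupload/storage")
os.makedirs(storage_path, exist_ok=True)            # create the directory tree if it is missing
test_file = os.path.join(storage_path, "write_test.tmp")
with open(test_file, "w") as f:
    f.write("ok")                                   # verify the component can actually write here
os.remove(test_file)
print("storage_path is usable:", os.path.abspath(storage_path))

If this check fails, fix storage_path before restarting the component.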

Determining reason that chrome enterprise-deployed extension will not load

I have a locally sourced enterprise-installed extension which is installed via the ExtensionInstallForcelist policy. The policy is visible on the chrome://policy page with a status of OK. The URL to the update manifest xml file is of the form "file:///c:/program%20files/xxx/updates.xml" The .crx file is also located in the same folder "file:///c:/program%20files/xxx/myextension.crx" I can successfully browse to both of those files. Yet the extension does not load.
Is there any way to determine the reason that chrome is not loading the extension? I do not see any indication of error. I have opened up the inspect developer window on the extension page, but see no console messages or exceptions. Is there a log file I could look at, or some other means of determining why the extension is not loading?
UPDATE: I turned on logging and see the following:
[3752:3156:0327/171545:WARNING:extension_error_reporter.cc(79)] Extension error: Expected ID "kfegeekbdleinhdfillngiggbjiflghe", but ID was "ijdpkgandgfnpbammiehlfpfpboclodn".
[3752:3440:0327/172253:WARNING:extension_protocols.cc(422)] Failed to GetPathForExtension: kfegeekbdleinhdfillngiggbjiflghe
[3752:3440:0327/172253:WARNING:url_request_job_manager.cc(89)] Failed to map: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/
[3752:3440:0327/172253:VERBOSE1:resource_loader.cc(364)] OnResponseStarted: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/
[3752:3440:0327/172253:VERBOSE1:resource_loader.cc(778)] ResponseCompleted: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/
[3752:3156:0327/172253:VERBOSE1:navigator_impl.cc(298)] Failed Provisional Load: chrome-extension://kfegeekbdleinhdfillngiggbjiflghe/, error_code: -2, error_description: Unknown error., showing_repost_interstitial: 0, frame_id: 1

Chef Deployment with irrelevant default symlinks

I am trying to deploy my application code with Chef, which is working for one node and failing on another. I cannot determine why it works for one node and not another when they have the exact same config, but I can at least try to debug the problem on the node that fails.
deploy_revision app_config['deploy_dir'] do
  scm_provider Chef::Provider::Git
  repo app_config['deploy_repo']
  revision app_config['deploy_branch']
  if secrets["deploy_key"]
    git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
  end
  enable_submodules true
  shallow_clone false
  symlink_before_migrate({})    # Symlinks to add before running db migrations
  purge_before_symlink []       # Directories to delete before adding symlinks
  create_dirs_before_symlink [] # Directories to create before adding symlinks
  # symlinks()
  action :deploy
  restart_command do
    service "apache2" do action :restart; end
  end
end
This is my recipe for deploying the code. Notice that I have tried disabling symlinking entirely, as Chef always jams its own defaults in. Even with this I get the error:
================================================================================
Error executing action `deploy` on resource 'deploy_revision[/var/www]'
================================================================================
Chef::Exceptions::FileNotFound
------------------------------
Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Resource Declaration:
---------------------
# In /var/chef/cache/cookbooks/kapture/recipes/api.rb
68:
69: deploy_revision app_config['deploy_dir'] do
70: scm_provider Chef::Provider::Git
71: repo app_config['deploy_repo']
72: revision app_config['deploy_branch']
73: if secrets["deploy_key"]
74: git_ssh_wrapper "#{app_config['deploy_dir']}/git-ssh-wrapper" # For private Git repos
75: end
76: enable_submodules true
Compiled Resource:
------------------
# Declared in /var/chef/cache/cookbooks/kapture/recipes/api.rb:69:in `from_file'
deploy_revision("/var/www") do
destination "/var/www/shared/cached-copy"
symlink_before_migrate {"config/database.yml"=>"config/database.yml"}
updated_by_last_action true
restart_command #<Proc:0x00007f40f366e5a0#/var/chef/cache/cookbooks/kapture/recipes/api.rb:82>
repository_cache "cached-copy"
retries 0
keep_releases 5
create_dirs_before_symlink ["tmp", "public", "config"]
updated true
provider Chef::Provider::Deploy::Revision
enable_submodules true
deploy_to "/var/www"
current_path "/var/www/current"
recipe_name "api"
revision "HEAD"
scm_provider Chef::Provider::Git
purge_before_symlink ["log", "tmp/pids", "public/system"]
git_ssh_wrapper "/var/www/git-ssh-wrapper"
remote "origin"
shared_path "/var/www/shared"
cookbook_name "kapture"
symlinks {"log"=>"log", "system"=>"public/system", "pids"=>"tmp/pids"}
action [:deploy]
repo "git#github.com:kapture/api.git"
retry_delay 2
end
[2012-09-24T15:42:07+00:00] ERROR: Running exception handlers
[2012-09-24T15:42:07+00:00] FATAL: Saving node information to /var/chef/cache/failed-run-data.json
[2012-09-24T15:42:07+00:00] ERROR: Exception handlers complete
[2012-09-24T15:42:07+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
[2012-09-24T15:42:07+00:00] FATAL: Chef::Exceptions::FileNotFound: deploy_revision[/var/www] (kapture::api line 69) had an error: Chef::Exceptions::FileNotFound: Cannot symlink /var/www/shared/config/database.yml to /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml before migrate: No such file or directory - /var/www/shared/config/database.yml or /var/www/releases/7404041cf8859a35de90ae72091bea1628391075/config/database.yml
Here you can see it mentions database.yml and the tmp/, system/, and pids folders, all of which are defaults that Chef likes to set (see the related bug).
Question 1
What are these symlinks for, and how do I know whether I even need any? What sort of things am I symlinking? I will be using migrations, so if they are needed for the migrations then I'll need them working.
I have read the documentation many times and it just doesn't explain this in plain English - at least not anywhere I have found.
Question 2
If I do not require them, how can I disable symlinking entirely? Following the examples in that bug report has not helped.
Clear out all the symlink attributes.
deploy_revision("/var/www") do
# ...
symlink_before_migrate.clear
create_dirs_before_symlink.clear
purge_before_symlink.clear
symlinks.clear
end
Make sure the deployment directory in shared has the proper directory structure (/var/www/shared/[log,pids,system,config]) and that all config files necessary for your application are in the config directory.
Your recipe for your application's cookbook should have an array of directory names to create (recursively) so that you won't run into this error again.
The symlinks are there so that, while your application code continues to evolve from release to release, you can share the log, pids, and system folders by symlinking shared/log to current/log, and so on.
Chef happens to cache the deploy's directory structure - I haven't dug into exactly where - via this application cookbook; I believe it is something in the deploy resource (which I never use). You can fix it by deleting the directory structure it creates under your deploy path. Also ensure your tmp directory is set up.
A couple reasons this may be an issue:
File permissions are incorrect
The currently running chef user cannot access the file
You are using the application cookbook in a configuration for rails deploys and your application does not have the same directory structure.
It's definitely caused by Chef caching the state of the deploy somewhere, then reading that state out of its cache - wherever that is - and reusing it. I would look at the application cookbook for any persistence, and if you fail to find it there, look in the deploy resource from Chef itself.

Error while configuring Google App Engine on PyDev in Eclipse

I am trying to configure Google App Engine in Eclipse and use it to run a Python application locally (on localhost).
For this I used the following tutorial as a guide:
http://www.mkyong.com/google-app-engine/google-app-engine-python-hello-world-example-using-eclipse/
I followed the steps properly, but when I try to use the configuration I get errors. The console output is:
Console Output:
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
WARNING 2012-06-20 14:53:01,451 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 126, in
run_file(file, globals())
File "C:\Program Files (x86)\Google\google_appengine\dev_appserver.py", line 122, in run_file
execfile(script_path, globals_)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 694, in
sys.exit(main(sys.argv))
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver_main.py", line 582, in main
root_path, {}, default_partition=default_partition)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3142, in LoadAppConfig
raise AppConfigNotFoundError
google.appengine.tools.dev_appserver.AppConfigNotFoundError
The configuration I am using is:
Windows 7 64 bit
python 2.7
Eclipse Helios
What could be the mistake in my GAE configuration?
Additional info: when I run the project with GAE manually (i.e., using the launcher), it works.
Update:
I experimented and discovered that these errors occur because the workspace and the Python installation folder are not in the same partition.
I got the hint from here:
File
"C:\Program Files(x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py"
line 582, in main
root_path, {}, default_partition=default_partition)
But when I made another workspace in the same partition, I got the following console output, and localhost is still not working:
Output:
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
C:\Program Files (x86)\Google\google_appengine\google\appengine\api\search\search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
WARNING 2012-06-20 17:20:56,719 rdbms_mysqldb.py:74] The rdbms API is not available because the MySQLdb library could not be loaded.
Runs a development application server for an application.
dev_appserver.py [options]
Application root must be the path to the application to run in this server.
Must contain a valid app.yaml or app.yml file.
Options:
--address=ADDRESS, -a ADDRESS Address to which this server should bind. (Default localhost)
--clear_datastore, -c Clear the Datastore on startup. (Default false)
--debug, -d Use debug logging. (Default false)
--help, -h View this helpful message.
--port=PORT, -p PORT Port for the server to run on. (Default 8080)
--allow_skipped_files Allow access to files matched by app.yaml's
skipped_files (default False)
--auth_domain Authorization domain that this app runs in.
(Default gmail.com)
--backends Run the dev_appserver with backends support
(multiprocess mode).
--blobstore_path=DIR Path to directory to use for storing Blobstore
file stub data.
--clear_prospective_search Clear the Prospective Search subscription index
(Default false).
--datastore_path=DS_FILE Path to file to use for storing Datastore file
stub data.
(Default c:\users\anukoo~1\appdata\local\temp\dev_appserver.datastore)
--debug_imports Enables debug logging for module imports, showing
search paths used for finding modules and any
errors encountered during the import process.
--default_partition Default partition to use in the APPLICATION_ID.
(Default dev)
--disable_static_caching Never allow the browser to cache static files.
(Default enable if expiration set in app.yaml)
--disable_task_running When supplied, tasks will not be automatically
run after submission and must be run manually
in the local admin console.
--enable_sendmail Enable sendmail when SMTP not configured.
(Default false)
--high_replication Use the high replication datastore consistency
model. (Default false).
--history_path=PATH Path to use for storing Datastore history.
(Default c:\users\anukoo~1\appdata\local\temp\dev_appserver.datastore.history)
--multiprocess_min_port When running in multiprocess mode, specifies the
lowest port value to use when choosing ports. If
set to 0, select random ports.
(Default 9000)
--mysql_host=HOSTNAME MySQL database host.
Used by the Cloud SQL (rdbms) stub.
(Default 'localhost')
--mysql_port=PORT MySQL port to connect to.
Used by the Cloud SQL (rdbms) stub.
(Default 3306)
--mysql_user=USER MySQL user to connect as.
Used by the Cloud SQL (rdbms) stub.
(Default )
--mysql_password=PASSWORD MySQL password to use.
Used by the Cloud SQL (rdbms) stub.
(Default '')
--mysql_socket=PATH MySQL Unix socket file path.
Used by the Cloud SQL (rdbms) stub.
(Default '')
--persist_logs Enables storage of all request and application
logs to enable later access. (Default false).
--require_indexes Disallows queries that require composite indexes
not defined in index.yaml.
--show_mail_body Log the body of emails in mail stub.
(Default false)
--skip_sdk_update_check Skip checking for SDK updates. If false, fall back
to opt_in setting specified in .appcfg_nag
(Default false)
--smtp_host=HOSTNAME SMTP host to send test mail to. Leaving this
unset will disable SMTP mail sending.
(Default '')
--smtp_port=PORT SMTP port to send test mail to.
(Default 25)
--smtp_user=USER SMTP user to connect as. Stub will only attempt
to login if this field is non-empty.
(Default '').
--smtp_password=PASSWORD Password for SMTP server.
(Default '')
--task_retry_seconds How long to wait in seconds before retrying a
task after it fails during execution.
(Default '30')
--use_sqlite Use the new, SQLite based datastore stub.
(Default false)
Invalid arguments
It seems like the arguments to dev_appserver.py are incorrect. Any ideas?
If you made the same mistake as me, then there is a very high chance that you have a space in the name of your directory.
The warnings about deprecated things in search can be ignored. As can the message about the rdbms API if you don't plan on using that.
AppConfigNotFoundError happens when there is no app.yaml in the directory passed to dev_appserver.py. If you followed those instructions, then your app.yaml would be in the 'src' directory, and the 'program arguments' in the build command would be ${project_loc}/src - is this the case? When you run from the command line and see it work, what command are you running, and from what location?
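As a quick way to confirm that, here is a small, hypothetical pre-flight check (not part of the App Engine SDK) that verifies the directory you pass to dev_appserver.py actually contains an app.yaml or app.yml - a missing file is exactly what raises AppConfigNotFoundError:

# Hypothetical pre-flight check, not part of the SDK: confirm the application root
# handed to dev_appserver.py contains an app.yaml or app.yml file.
import os
import sys

root = sys.argv[1] if len(sys.argv) > 1 else "."
if not any(os.path.isfile(os.path.join(root, name)) for name in ("app.yaml", "app.yml")):
    sys.exit("No app.yaml/app.yml in %s - dev_appserver.py will raise AppConfigNotFoundError"
             % os.path.abspath(root))
print("Found app config in", os.path.abspath(root))

Run it with the same path you put in the PyDev 'program arguments' (for that tutorial layout, the src directory); if it reports a missing file, dev_appserver.py will fail in the same way.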