Schedule spider with Scrapyd

I'm trying to schedule a spider run. I wrote:
curl http://localhost:6800/schedule.json -d project=elettronica -d spider=Prokoo
It returns:
{"status": "error", "message": "'elettronica'"}
In scrapyd.log I see:
2014-04-16 17:55:16+0200 [HTTPChannel,8,87.18.14.190] 87.18.14.190 - - [16/Apr/2014:15:55:16 +0000] "GET /schedule.json HTTP/1.1" 200 61 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36"
2014-04-16 17:55:35+0200 [HTTPChannel,10,127.0.0.1] Unhandled Error
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/web/http.py", line 1618, in allContentReceived
req.requestReceived(command, path, version)
File "/usr/lib/python2.7/dist-packages/twisted/web/http.py", line 773, in requestReceived
self.process()
File "/usr/lib/python2.7/dist-packages/twisted/web/server.py", line 132, in process
self.render(resrc)
File "/usr/lib/python2.7/dist-packages/twisted/web/server.py", line 167, in render
body = resrc.render(self)
--- <exception caught here> ---
File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 18, in render
return JsonResource.render(self, txrequest)
File "/usr/lib/pymodules/python2.7/scrapy/utils/txweb.py", line 10, in render
r = resource.Resource.render(self, txrequest)
File "/usr/lib/python2.7/dist-packages/twisted/web/resource.py", line 216, in render
return m(request)
File "/usr/lib/pymodules/python2.7/scrapyd/webservice.py", line 37, in render_POST
self.root.scheduler.schedule(project, spider, **args)
File "/usr/lib/pymodules/python2.7/scrapyd/scheduler.py", line 15, in schedule
q = self.queues[project]
exceptions.KeyError: 'elettronica'
Can anyone help me?
Regards,
Dennis

According to your error message (KeyError: 'elettronica'), scrapyd has no queue for that project, which usually means the project name is typed wrong or the project was never deployed. Check this line again:
curl http://localhost:6800/schedule.json -d project=elettronica -d spider=Prokoo
and be sure the project name elettronica is correct.
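If the spelling looks right, you can also ask scrapyd which projects it actually knows about. A minimal sketch, assuming the requests package is installed and scrapyd is running on its default port:
# List the projects scrapyd has deployed; the name passed to schedule.json
# must appear in this list.
import requests

resp = requests.get("http://localhost:6800/listprojects.json")
print(resp.json())  # e.g. {"status": "ok", "projects": ["elettronica", ...]}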

Related

kubernetes.client.exceptions.ApiException: (0) Reason: Handshake status 500 Internal Server Error

I am getting the above exception while deploying 'dags' in the pipeline.
The log is as follows:
************************************************************************************************************
*                                            Deploying 'dags'...                                           *
************************************************************************************************************
[2022-12-16 14:09:48,076] {io} INFO - Current directory: /artifacts/dags
[2022-12-16 14:09:48,076] {copy_deploy_tool} INFO - Deploy 'dags' by copying files...
[2022-12-16 14:09:48,083] {deploy_tool} INFO - saving values.yaml...
[2022-12-16 14:09:48,162] {copy_deploy_tool} INFO - Removing files from 'development:airflow-5db795dd7c-d586h:/root/airflow/dags'
[2022-12-16 14:09:48,264] {deploy} ERROR - Execution failed for project: dags
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/kubernetes/stream/ws_client.py", line 296, in websocket_call
client = WSClient(configuration, get_websocket_url(url), headers, capture_all)
File "/usr/local/lib/python3.6/dist-packages/kubernetes/stream/ws_client.py", line 94, in __init__
self.sock.connect(url, header=header)
File "/usr/local/lib/python3.6/dist-packages/websocket/_core.py", line 253, in connect
self.handshake_response = handshake(self.sock, *addrs, **options)
File "/usr/local/lib/python3.6/dist-packages/websocket/_handshake.py", line 57, in handshake
status, resp = _get_resp_headers(sock)
File "/usr/local/lib/python3.6/dist-packages/websocket/_handshake.py", line 143, in _get_resp_headers
raise WebSocketBadStatusException("Handshake status %d %s", status, status_message, resp_headers)
websocket._exceptions.WebSocketBadStatusException: Handshake status 500 Internal Server Error
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/commands/deploy/deploy.py", line 65, in __try_execute
deployment=project[DEPLOYMENT],
File "/usr/local/lib/python3.6/dist-packages/tools/deploy/copy_deploy_tool.py", line 50, in run
namespace, container, pod_name, command, api_client=api_client
File "/usr/local/lib/python3.6/dist-packages/helpers/kubernetes.py", line 133, in run_pod_command
stdout=True,
File "/usr/local/lib/python3.6/dist-packages/kubernetes/stream/stream.py", line 35, in stream
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api/core_v1_api.py", line 841, in connect_get_namespaced_pod_exec
(data) = self.connect_get_namespaced_pod_exec_with_http_info(name, namespace, **kwargs) # noqa: E501
File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api/core_v1_api.py", line 941, in connect_get_namespaced_pod_exec_with_http_info
collection_formats=collection_formats)
File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 345, in call_api
_preload_content, _request_timeout)
File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 176, in __call_api
_request_timeout=_request_timeout)
File "/usr/local/lib/python3.6/dist-packages/kubernetes/stream/stream.py", line 30, in _intercept_request_call
return ws_client.websocket_call(config, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/kubernetes/stream/ws_client.py", line 302, in websocket_call
raise ApiException(status=0, reason=str(e))
kubernetes.client.rest.ApiException: (0)
Reason: Handshake status 500 Internal Server Error
[2022-12-16 14:09:49,631] {shell} INFO - doi: Deployment record uploaded successfully
[2022-12-16 14:09:49,631] {shell} INFO - OK
[2022-12-16 14:09:49,635] {io} INFO - Current directory: /artifacts
[2022-12-16 14:09:49,635] {pretty_info} INFO -
Usually this happens when the pod is in the Running state but has no running containers (0/1). If you then run an exec command against that pod and container, you get a 500 Internal Server Error instead of an error describing the real issue (the container is not running).
Check that all containers are actually running before exec'ing, for example with a guard like:
if all(p.status.phase == "Running" for p in my_pods) \
        and all(c.state.running for p in my_pods for c in p.status.container_statuses):
    ...  # safe to exec into the pod
Also refer to the related Stack Overflow post and GitHub issue.
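For reference, a fuller sketch of that check with the official Python client (the namespace comes from the log above; the label selector is an assumption to adapt to your deployment):
# Hedged sketch: assumes a reachable kubeconfig and that the label selector
# actually matches the Airflow pods.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

my_pods = v1.list_namespaced_pod("development", label_selector="app=airflow").items

all_running = all(p.status.phase == "Running" for p in my_pods) and all(
    c.state.running
    for p in my_pods
    for c in (p.status.container_statuses or [])
)
if not all_running:
    raise RuntimeError("Some containers are not running yet; exec would fail with the 500 handshake error")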

Does a Caffe model work on images downloaded from Google search?

So, I started with this article https://towardsdatascience.com/predict-age-and-gender-using-convolutional-neural-network-and-opencv-fd90390e3ce6 for age and gender detection, and I am facing a trivial problem: I am not able to run Caffe on pictures downloaded from Google. It runs only on the pictures that I take with my phone or webcam. Is there any specific reason, or am I doing something incorrectly? I am also wrapping all of this with Flask.
For example, when I feed this image that I took from a Google search: https://www.hanselman.com/blog/content/binary/WindowsLiveWriter/DIYMakingaWideAngleWebcam_1478B/2010-02-16%2023-01-29.283_2.jpg
I get this in my logs:
127.0.0.1 - - [12/Mar/2020 11:51:57] "POST /predicWithImage HTTP/1.1" 500 -
Traceback (most recent call last):
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\app.py", line 2463, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\app.py", line 2449, in wsgi_app
response = self.handle_exception(e)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask_cors\extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\app.py", line 1866, in handle_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\_compat.py", line 39, in reraise
raise value
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\app.py", line 1952, in full_dispatch_request
return self.finalize_request(rv)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\app.py", line 1967, in finalize_request
response = self.make_response(rv)
File "C:\Users\abc\AppData\Local\Continuum\anaconda3\lib\site-
packages\flask\app.py", line 2097, in make_response
"The view function did not return a valid response. The"
TypeError: The view function did not return a valid response. The
function either returned None or ended without a return statement.
versus the logs when I feed a picture taken from my webcam/phone:
Found 1 faces
printing the blob
Gender: Male
Age Range: (15, 20)
127.0.0.1 - - [12/Mar/2020 11:56:07] "POST /predicWithImage HTTP/1.1" 200 -
As you can see, I am getting 200 for pictures from the webcam versus 500 for Google pictures. It's not an issue with the Flask wrapper: I also tested the code directly, feeding a downloaded picture from my disk into cv.imread(), and the Caffe model is still not picking it up.
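A minimal sanity check along these lines (the file name is just a placeholder) shows whether OpenCV decodes the downloaded file at all, since cv2.imread() returns None silently for files it cannot read (for example a truncated download or a WebP file renamed to .jpg):
import cv2

# Placeholder file name: if imread() returns None, OpenCV never decoded the
# image, so the Caffe model has nothing to work on.
img = cv2.imread("downloaded_from_google.jpg")
if img is None:
    raise ValueError("OpenCV could not decode this file; re-save it as a plain JPEG or PNG")
print("decoded image shape:", img.shape)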

Error loading kernelspec 'pyspark2.2' : Jupyter Notebook

I have installed Anaconda on Windows 10, and every time I open Jupyter Notebook I get the error below. Can someone please help me understand the issue and its resolution?
http://localhost:8888/?token=7902567fdc4d1d33959bd34f85ce21f842677e1efd65ea20
[I 11:35:11.235 NotebookApp] Accepting one-time-token-authenticated connection from ::1
[W 11:35:14.680 NotebookApp] Error loading kernelspec 'pyspark2.2'
Traceback (most recent call last):
File "D:\Anaconda3\envs\pythonREnv\lib\site-packages\jupyter_client\kernelspec.py", line 258, in get_all_specs
spec = self._get_kernel_spec_by_name(kname, resource_dir)
File "D:\Anaconda3\envs\pythonREnv\lib\site-packages\jupyter_client\kernelspec.py", line 201, in _get_kernel_spec_by_name
return self.kernel_spec_class.from_resource_dir(resource_dir)
File "D:\Anaconda3\envs\pythonREnv\lib\site-packages\jupyter_client\kernelspec.py", line 47, in from_resource_dir
kernel_dict = json.load(f)
File "D:\Anaconda3\envs\pythonREnv\lib\json\__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "D:\Anaconda3\envs\pythonREnv\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "D:\Anaconda3\envs\pythonREnv\lib\json\decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 15 (char 14)
Output of jupyter kernelspec list:
Traceback (most recent call last):
File "D:\Anaconda3\Scripts\jupyter-kernelspec-script.py", line 10, in <module>
sys.exit(KernelSpecApp.launch_instance())
File "D:\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspecapp.py", line 273, in start
return self.subapp.start()
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspecapp.py", line 44, in start
specs = self.kernel_spec_manager.get_all_specs()
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 224, in get_all_specs
} for kname in d}
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 224, in <dictcomp>
} for kname in d}
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 192, in _get_kernel_spec_by_name
return self.kernel_spec_class.from_resource_dir(resource_dir)
File "D:\Anaconda3\lib\site-packages\jupyter_client\kernelspec.py", line 40, in from_resource_dir
kernel_dict = json.load(f)
File "D:\Anaconda3\lib\json\__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "D:\Anaconda3\lib\json\__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "D:\Anaconda3\lib\json\decoder.py", line 342, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 15 (char 14)
I had created a new kernel file under C:\Users\myusername\.ipython\kernels\pyspark2.2, a folder I created in order to install Apache Spark and enable Pixiedust on Jupyter Notebook, but that is also not working.
I referred to the following link to create the kernels.json file below: https://github.com/pixiedust/pixiedust/wiki/Setup:-Install-and-Configure-pixiedust
"display_name": "pySpark (Spark 2.3.1) Python 3", "language": "python", "argv": [ "D:\Anaconda3\", "-m", "ipykernel", "-f", "{connection_file}" ], "env": { "SPARK_HOME": "D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\", "PYTHONPATH": "D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\python\:D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip", "PYTHONSTARTUP": "D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\python\pyspark\shell.py", "PYSPARK_SUBMIT_ARGS": "--driver-class-path D:\Pixiedust\bin\spark-2.3.1-bin-hadoop2.7\data\mllib\* --master local[10] pyspark-shell", "SPARK_DRIVER_MEMORY":"10G", "SPARK_LOCAL_IP":"127.0.0.1" } }
Thanks
Ganesh Bhat

Odoo bug after restoring database

I'm using Odoo 11 on localhost and recently did a database restore from pgAdmin 4, where it completed successfully. But when I choose the database from the Odoo login screen, the screen goes blank and does not respond (see the attached picture).
I tried this to reset the JavaScript assets in the browser:
localhost:8069/web?debug=
but it is still not working.
Here are the logs:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_cron.py", line 92, in _callback
self.env['ir.actions.server'].browse(server_action_id).run()
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_actions.py", line 536, in run
res = func(action, eval_context=eval_context)
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_actions.py", line 417, in run_action_code_multi
safe_eval(action.sudo().code.strip(), eval_context, mode="exec", nocopy=True) # nocopy allows to return 'action'
File "C:\Odoo 11.0\server\odoo\tools\safe_eval.py", line 370, in safe_eval
pycompat.reraise(ValueError, ValueError('%s: "%s" while evaluating\n%r' % (ustr(type(e)), ustr(e), expr)), exc_info[2])
File "C:\Odoo 11.0\server\odoo\tools\pycompat.py", line 85, in reraise
raise value.with_traceback(tb)
File "C:\Odoo 11.0\server\odoo\tools\safe_eval.py", line 347, in safe_eval
return unsafe_eval(c, globals_dict, locals_dict)
File "", line 1, in <module>
File "C:\Odoo 11.0\server\odoo\addons\mail\models\ir_autovacuum.py", line 13, in power_on
return super(AutoVacuum, self).power_on(*args, **kwargs)
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_autovacuum.py", line 36, in power_on
self._gc_transient_models()
File "C:\Odoo 11.0\server\odoo\addons\base\ir\ir_autovacuum.py", line 20, in _gc_transient_models
model._transient_vacuum(force=True)
File "C:\Odoo 11.0\server\odoo\models.py", line 4048, in _transient_vacuum
self._transient_clean_rows_older_than(self._transient_max_hours * 60 * 60)
File "C:\Odoo 11.0\server\odoo\models.py", line 4009, in _transient_clean_rows_older_than
self.sudo().browse(ids).unlink()
File "C:\Odoo 11.0\server\odoo\models.py", line 2857, in unlink
cr.execute(query, (sub_ids,))
File "C:\Odoo 11.0\server\odoo\sql_db.py", line 155, in wrapper
return f(self, *args, **kwargs)
File "C:\Odoo 11.0\server\odoo\sql_db.py", line 232, in execute
res = self._obj.execute(query, params)
ValueError: <class 'psycopg2.IntegrityError'>: "null value in column "wizard_id" violates not-null constraint
DETAIL: Failing row contains (1, null, 8, null, null, 1, 2018-01-01 03:32:24.944104, 1, 2018-01-01 03:32:25.077112).
CONTEXT: SQL statement "UPDATE ONLY "public"."change_password_user" SET "wizard_id" = NULL WHERE $1 OPERATOR(pg_catalog.=) "wizard_id""
" while evaluating
'model.power_on()'
I think some method is not found because your Odoo source code is old, so get the latest code from the Odoo GitHub repository: https://github.com/odoo/odoo
Then, in a terminal, update all the modules like this:
./odoo-bin -d your_database_name --db-filter your_database_name --addons-path your_all_addons_path_name -u all
Hopefully this tip helps.
Maybe you have some missing files; try to also restore the folder named filestore.
That folder can be found in:
/home/$User/.local/share/Odoo/filestore
Replace $User with your Ubuntu username.
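A minimal sketch of that restore step (both paths are assumptions; the source is wherever your backup keeps the filestore for this database):
import shutil

# Copy the backed-up filestore of the restored database into Odoo's data
# directory so attachments and web assets resolve again.
src = "/path/to/backup/filestore/your_database_name"
dst = "/home/your_user/.local/share/Odoo/filestore/your_database_name"
shutil.copytree(src, dst)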

parallel-python error: RuntimeError("Socket connection is broken")

I am using a simple program to send a function:
import pp
nodes=('mosura02','mosura03','mosura04','mosura05','mosura06',
'mosura09','mosura10','mosura11','mosura12')
nodes=('miner:60001',)
def pptester():
    js = pp.Server(ppservers=nodes)
    js.set_ncpus(0)
    tmp = []
    for i in range(200):
        tmp.append(js.submit(ppworktest, (), (), ('os',)))
    return tmp

def ppworktest():
    import os
    return os.system("uname -a")
the result is:
wkerzend#mosura:/home/wkerzend/tmp/ppython_test>ssh miner "source ~/coala_python_setup.sh;ppserver.py -d -p 60001"
2010-04-12 00:50:48,162 - pp - INFO - Creating server instance (pp-1.6.0)
2010-04-12 00:50:52,732 - pp - INFO - pp local server started with 32 workers
2010-04-12 00:50:52,732 - pp - DEBUG - Strarting network server interface=0.0.0.0 port=60001
Exception in thread client_socket:
Traceback (most recent call last):
File "/usr/lib64/python2.6/threading.py", line 525, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.6/threading.py", line 477, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/wkerzend/python_coala/bin/ppserver.py", line 161, in crun
ctype = mysocket.receive()
File "/home/wkerzend/python_coala/lib/python2.6/site-packages/pptransport.py", line 178, in receive
raise RuntimeError("Socket connection is broken")
RuntimeError: Socket connection is broken