How to inspect RScript requests in dashDB? - ibm-cloud

I am trying to make a POST request to run an RScript in dashDB. The request normally returns 200, but my code doesn't actually run.
I would like to inspect a log in dashDB, but I can't find anything in the documentation about how to do that.
Thanks!
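Since the request itself returns 200, one generic way to narrow this down while looking for a server-side log is curl's verbose mode, which prints the full request and response headers. A minimal sketch; $DASHDB_URL and payload.json are placeholders, not the actual dashDB API:
# -v shows exactly what is sent and what comes back alongside the 200,
# which can reveal whether the script payload was actually accepted.
curl -v -X POST "$DASHDB_URL" \
  -H "Content-Type: application/json" \
  -d @payload.json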

Related

How to get the response and logs of a Jenkins test without using the interface, from the command line using curl?

I am using curl command to build a job in Jenkins:
curl --user "admin:passwd" -X POST http://localhost:8080/job/jobname/build
How can I check whether the test succeeded or failed, and how can I get the logs of that build from the command line only, preferably using curl?
If you have the BlueOcean plugin installed, you can query its API. It usually returns JSON output that you may need to query further.
First, find the build number triggered by your curl command. Then wait until your build is over. Then you can query the result.
A good start is:
curl -s ${your_jenkins}/blue/rest/organizations/jenkins/pipelines/${jobname}/runs/${buildnumber}/nodes/?limit=10000

How to start TensorFlow Serving ModelServer with the REST API endpoint

I'm trying to make use of the new ability to send HTTP requests to the TensorFlow ModelServer. However, when I try to run the following, it doesn't recognize the --rest_api_port argument:
tensorflow_model_server --rest_api_port=8501 \
--model_name=half_plus_three \
--model_base_path=$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_three/
unknown argument: --rest_api_port=8501
I've encountered the same error. I looked through the source code.
In the main.cc file, there is no rest_api_port option in source code versions r1.7 and below.
So if you want to use REST, you need tensorflow-serving r1.8 or above, or you have to implement it yourself.
Hope this is helpful to you.
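Once an r1.8+ binary is used, the same command from the question should accept --rest_api_port, and the endpoint can be exercised with curl. This follows the standard TensorFlow Serving REST API (the /v1/models/...:predict URL format); the port and model name are taken from the question:
# half_plus_three computes x/2 + 3 for each instance, so this should
# return {"predictions": [3.5, 4.0, 5.5]}.
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
  http://localhost:8501/v1/models/half_plus_three:predict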

curl mydb/_changes?feed=continuous disconnects after about 20 seconds

Based on the API reference, the continuous parameter should keep the connection open. However, the following curl command returns after about 20 seconds, which contradicts the API reference. Could anyone explain this? Thanks.
curl https://cloudant-URL-bluemix.cloudant.com/mydb/_changes?feed=continuous --http1.1
Returns:
"last_seq":"11-g1AAAAYeeJy11M1NwzAUB3DTIiFOdAM4wLElbpzYPtENYAPw84dKlaYItWfYADaADWAD2AA2gA3KBsWuEampSpIiLo4UWb9_3nt2MoRQq99U6ECBHF3qngIcdWDYlrKtRNaO4o7MRhMl8nEn1-PMbm8IBDuz2WzQb4qNoX2xRSQXkkZVERe3_70zLUmDll1h9ysQzQM5JEaBDdye5Eqb81yrEE3K0D2HHgYoIZIow1ej3TI0cuhRiBolKFtCa3Qbek49DlXNmDB69afSMvTEoacBGiexTDX9Q1PPHDoKUYkBuuSX-nGZeuHUq-DAqTQ2WsRVnZodzzftiq7tw4beuNTGPFXTLgCV_3PMfeitD70rQqXAnLGkIlVvYD7z3mc-FEMTmCjtCl33Jnj40cNPDm76YjTXkUgrUmuN7dmHvhTVUA1M8KUjWHc0rx5-W7iF1EQGx-tfGA-_e3hazJwTLhmBitSPNlWbzYcPXfyncqmxYovVDD4Bc-fgbg","pending":0}
Try:
curl 'https://cloudant-URL-bluemix.cloudant.com/mydb/_changes?feed=continuous&heartbeat=1000'
It's probably timing out because there are no changes to report. See
http://docs.couchdb.org/en/2.0.0/api/database/changes.html
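As a sketch, heartbeat can be combined with since=now (both standard CouchDB/Cloudant _changes parameters) so the feed stays open and only streams new changes; the values here are illustrative:
# heartbeat=10000 makes the server send a newline every 10 s when idle,
# which stops the connection from being closed as dead; single quotes
# keep the shell from treating & as a background operator.
curl 'https://cloudant-URL-bluemix.cloudant.com/mydb/_changes?feed=continuous&heartbeat=10000&since=now' --http1.1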

Issue processing RSS feed with Perl/CURL

I have this RSS feed URL:
http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php
A client is trying to access this RSS programmatically via Perl like this:
# Fetch the content available in source HTTP URL
`curl -g --compressed "$source_url" > $tempRSSFile`;
Where $source_url is http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php
But they said that they couldn't access the feed this way with my URL. I know nothing about Perl, so could you guys point me in the right direction to make a compatible URL for the feed?
Thanks a lot!
The problem has nothing to do with Perl. If you run the curl command from the command line, you get an Error 406 - Not Acceptable. One possibility is to trick mod_security by using another User-Agent header. This works right now:
curl --user-agent Mozilla/5.0 -g --compressed http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php > /tmp/feed.rss
But better, as amon already said, is to fix the server and allow RSS downloads for curl as well.
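To verify the 406 and the fix from the command line, curl can print just the status code (assuming the server still rejects curl's default User-Agent):
# -o /dev/null discards the body, -w prints the HTTP status code.
curl -s -o /dev/null -w '%{http_code}\n' \
  http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php
# Expected: 406. With the spoofed User-Agent it should print 200 instead:
curl -s -o /dev/null -w '%{http_code}\n' --user-agent 'Mozilla/5.0' \
  http://mediosymedia.com/wp-content/plugins/nextgen-gallery/xml/media-rss.php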

wget can't download - 404 error

I tried to download an image using wget but got an error like the following.
--2011-10-01 16:45:42-- http://www.icerts.com/images/logo.jpg
Resolving www.icerts.com... 97.74.86.3
Connecting to www.icerts.com|97.74.86.3|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2011-10-01 16:45:43 ERROR 404: Not Found.
My browser has no problem loading the image.
What's the problem?
curl can't download either.
Thanks.
Sam
You need to add the Referer field to the headers of the HTTP request. With wget, you just need the --header argument:
wget http://www.icerts.com/images/logo.jpg --header "Referer: www.icerts.com"
And the result:
--2011-10-02 02:00:18-- http://www.icerts.com/images/logo.jpg
Resolving www.icerts.com (www.icerts.com)... 97.74.86.3
Connecting to www.icerts.com (www.icerts.com)|97.74.86.3|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 6102 (6.0K) [image/jpeg]
Saving to: 'logo.jpg'
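Since the question mentions that curl fails too, the curl equivalent uses the --referer option (same Referer value as above):
# -O saves the file under its remote name (logo.jpg).
curl -O --referer www.icerts.com http://www.icerts.com/images/logo.jpg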
I had the same problem with a Google Docs URL. Enclosing the URL in quotes did the trick for me (without quotes, the shell treats the & in the URL as a background operator and cuts the URL short):
wget "https://docs.google.com/spreadsheets/export?format=tsv&id=1sSi9f6m-zKteoXA4r4Yq-zfdmL4rjlZRt38mejpdhC23" -O sheet.tsv
You will also get a 404 error if you are using IPv6 and the server only accepts IPv4.
To make the request over IPv4, add -4:
wget -4 http://www.php.net/get/php-5.4.13.tar.gz/from/this/mirror
I had the same problem.
I solved it by using single quotes, like this:
$ wget 'http://www.icerts.com/images/logo.jpg'
wget version in use:
$ wget --version
GNU Wget 1.11.4 Red Hat modified
A wget 404 error also always happens if you try to download the pages of a WordPress website by typing
wget -r http://somewebsite.com
If the website is built with WordPress, you'll get this error:
ERROR 404: Not Found.
There's no way to mirror a WordPress website this way, because the content is stored in a database and wget is not able to grab the .php files. That's why you get the wget 404 error.
I know it's not this question's case, because Sam only wants to download a single picture, but it may be helpful for others.
I don't know exactly what the reason is, but I have faced this kind of problem.
If you have the domain's IP address (e.g. 208.113.139.4), use the IP address instead of the domain (in this case www.icerts.com):
wget 192.243.111.11/images/logo.jpg
You can find the IP for a URL at https://ipinfo.info/html/ip_checker.php
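One caveat with requesting the bare IP: if the server hosts several sites on one address (name-based virtual hosting), a plain IP request can itself return 404. A sketch of working around that by sending the original Host header along with the IP (the IP is the one from the question's output):
wget --header "Host: www.icerts.com" http://97.74.86.3/images/logo.jpg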
I want to add something to #blotus's answer:
In case adding the Referer header does not solve the issue, maybe you are using the wrong referrer (sometimes the referrer is different from the URL's domain name).
Paste the URL into a web browser and find the referrer in the developer tools (Network -> Request Headers).
I met exactly the same problem while setting up GitHub Actions with Cygwin. Only after I used wget --debug <url> did I realize that the URL had a 0x0d character appended, which is \r (carriage return).
For this kind of problem, there is a solution described in the Cygwin docs:
you can also use igncr in the SHELLOPTS environment variable
So I added the following lines to my YAML script to make wget, as well as the other shell commands in my GHA workflow, work properly:
env:
  SHELLOPTS: igncr
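If setting SHELLOPTS is not an option, a workaround sketch is to strip the carriage return from the value before passing it to wget:
# Remove any trailing CR (0x0d) left over from Windows line endings.
url=$(printf '%s' "$url" | tr -d '\r')
wget "$url"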