I am using the ArtifactoryGenericDownload@3 task to download a .whl file from JFrog Artifactory. However, I want to download only the latest version, which is python/de-cf-dnalib/0.7.0, but this cannot be hardcoded because the version is updated from time to time. Could you please suggest a solution for selecting the version automatically in my code?
- task: ArtifactoryGenericDownload@3
  inputs:
    connection: "JFROG"
    specSource: "taskConfiguration"
    fileSpec: |
      {
        "files": [
          {
            "pattern": "python/*.whl",
            "target": "./$(Pipeline.Workspace)/de-cf-dnalib"
          }
        ]
      }
    failNoOp: true
result:
{
  "files": [
    {
      "pattern": "python/de-cf-dnalib/*.whl",
      "target": ".//datadisk/agents-home/...work/744/de-cf-dnalib"
    }
  ]
}
Executing JFrog CLI Command: /datadisk/hostedtoolcache/jfrog/1.53.2/x64/jfrog rt dl --url="https://jfrog.io/artifactory" --access-token=*** --spec="/datadisk/agents-home/agent-0/azl-da-d-02-0/_work/744/s/downloadSpec1656914680005.json" --fail-no-op=true --dry-run=false --insecure-tls=false --threads=3 --retries=3 --validate-symlinks=false --split-count=3 --min-split=5120
[Info] Searching items to download...
[Info] [Thread 2] Downloading python/de-cf-dnalib/0.5.0/de_cf_dnalib-0.5.0-py3-none-any.whl
[Info] [Thread 1] Downloading python/de-cf-dnalib/0.6.0/de_cf_dnalib-0.6.0-py3-none-any.whl
[Info] [Thread 0] Downloading python/de-cf-dnalib/0.7.0.dev0/de_cf_dnalib-0.7.0.dev0-py3-none-any.whl
[Info] [Thread 2] Downloading python/de-cf-dnalib/0.7.0/de_cf_dnalib-0.7.0-py3-none-any.whl
{
  "status": "success",
  "totals": {
    "success": 4,
    "failure": 0
  }
}
fileSpec also supports filtering by Artifactory Query Language (AQL) instead of a pattern.
With AQL, you can sort by version or creation date and keep only the most recently created files, for example:
items.find({
  "repo": "my-repo",
  "name": {"$match": "*.jar"}
}).include("name","created").sort({"$desc": ["created"]}).limit(2)
You can read more about AQL at the following link:
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language
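Applied to the task above, the sorting can also go directly into the file spec. A minimal sketch, assuming the repository layout from your question and that the JFrog CLI on your agent supports the "sortBy"/"sortOrder"/"limit" file spec fields:

fileSpec: |
  {
    "files": [
      {
        "pattern": "python/de-cf-dnalib/*.whl",
        "target": "$(Pipeline.Workspace)/de-cf-dnalib/",
        "sortBy": ["created"],
        "sortOrder": "desc",
        "limit": 1
      }
    ]
  }

This matches every de-cf-dnalib wheel, sorts the matches by creation date descending, and downloads only the newest one, so new releases are picked up without hardcoding 0.7.0.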
I am a student trying to customize the system calls in gVisor. I have successfully compiled gVisor on the go branch, and I get the expected messages when I change the pkg/sentry/kernel/syslog.go file. Here is the output showing that I have successfully compiled runsc (the gVisor runtime).
sudo docker run --runtime=runsc -it ubuntu dmesg
[ 0.000000] asdf Starting gVisor...
[ 0.360765] 6666...
[ 0.529799] 5555...
[ 0.959593] iiiiii...
[ 1.343602] 7777...
[ 1.347068] 4444...
[ 1.424063] 00000...
[ 1.470641] 22222...
[ 1.858755] 99999...
[ 2.213219] 8888...
[ 2.679995] cccccc ..
[ 2.943468] asdf asdf Setting up VFS2...
[ 3.429006] Ready!
I have also noticed that the package gvisor/pkg/sentry/syscalls/linux contains all the syscalls, and that they are registered in the file gvisor/pkg/sentry/syscalls/linux/linux64.go. However, I have failed to customize the syscalls in gVisor.
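For reference, this is roughly the shape of a table entry and handler as I understand it; a sketch only, since the exact handler signature differs between gVisor revisions (newer checkouts also pass the syscall number to the handler):

// In gvisor/pkg/sentry/syscalls/linux/linux64.go each syscall number maps to a
// handler; a hypothetical excerpt of the AMD64 table:
//
//     39: syscalls.Supported("getpid", Getpid),
//
// The matching handler lives in the same package (package linux, using
// pkg/sentry/kernel, pkg/sentry/arch and pkg/log):
func Getpid(t *kernel.Task, args arch.SyscallArguments) (uintptr, *kernel.SyscallControl, error) {
	// Custom behaviour goes here; log.Infof output is visible when runsc runs with --debug.
	log.Infof("getpid intercepted by my custom handler")
	return uintptr(t.ThreadGroup().ID()), nil, nil
}

After editing a handler, runsc has to be rebuilt and Docker pointed at the new binary again, exactly as for the syslog.go change.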
Thanks very much.
Sorry, I am reformulating the question; I was so frustrated by this error that I posted the question in a snap.
I am trying to use the camel-facebook component with a very simple route that appears as follows in the blueprint.xml file:
<from uri="facebook://me?oAuthAppId={{oAuthAppId}}&oAuthAppSecret={{oAuthAppSecret}}&oAuthAccessToken={{oAuthAccessToken}}&consumer.delay=86400000"/>
I am using :
Red Hat JBoss Developer Studio
Version: 10.1.0.GA
Actually I see the bundle started :
[ 348] [Active ] [Created ] [ ] [ 80] MyApp [fbdemo] (1.0.0.SNAPSHOT)
Also :
[ 333] [Active ] [ ] [ ] [ 50] camel-facebook (2.17.0.redhat-630187)
Here is the error I get (I have put XXXXXXXXX in place of the oAuth* values):
2017-02-14 16:02:16,128 | ERROR | 68)-192.168.56.1 | BlueprintCamelContext | 234 - org.apache.camel.camel-blueprint - 2.17.0.redhat-630187 | Error occurred during starting Camel: CamelContext(blueprintContext) due Failed to create route fbRoute: Route(fbRoute)[[From[facebook://me?oAuthAppId={{oAuthAppId}}... because of Failed to resolve endpoint: facebook://me?oAuthAppId=XXXXXXXXXXXXXX
&oAuthAppSecret=XXXXXXXXXXXXXXXXXXXXXX&oAuthAccessToken=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&consumer.delay=86400000 due to: Illegal character in query at index 40: facebook://me?oAuthAppId=XXXXXXXXXXXXXXXXXXXXXXX
&oAuthAppSecret=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&oAuthAccessToken=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&consumer.delay=86400000
org.apache.camel.FailedToCreateRouteException: Failed to create route fbRoute: Route(fbRoute)[[From[facebook://me?oAuthAppId={{oAuthAppId}}... because of Failed to resolve endpoint: facebook://me?oAuthAppId=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
&oAuthAppSecret=XXXXXXXXXXXXXXXXXXXXXXXXXXXX&oAuthAccessToken=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&consumer.delay=86400000 due to: Illegal character in query at index 40: facebook://me?oAuthAppId=XXXXXXXXXXXXXXXX
Should the oAuthAccessToken be the application access token or the user access token that I get from the Facebook Graph Explorer? Note that I don't have any special characters in the app secret; the only special character is a | (pipe), and only if the access token is the application access token rather than the user access token. How do I work out what is at index 40?
Thank you very much
If you are using XML then you need to check that all "&" symbols are encoded as "&amp;", so it looks like:
<from uri="facebook://me?oAuthAppId=XXXXXXXXXXXXX&amp;oAuthAppSecret=XXXXXXXXXXXXXXXXXXXXXXXXX&amp;oAuthAccessToken=XXXXXXXXXXXXXXXXXXXXX&amp;consumer.delay=86400000" />
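If the access token value itself contains a character that is illegal in a URI (the | pipe in an application access token, for example), Camel's RAW() syntax stops the value from being interpreted as part of the URI. A sketch using the same placeholder properties:

<from uri="facebook://me?oAuthAppId={{oAuthAppId}}&amp;oAuthAppSecret={{oAuthAppSecret}}&amp;oAuthAccessToken=RAW({{oAuthAccessToken}})&amp;consumer.delay=86400000"/>

That would also explain the "Illegal character in query at index 40" message: the index is an offset into the resolved URI string, pointing at the first character the URI parser rejected.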
I have the following environment:
/usr/share/logstash# bin/logstash --path.settings=/etc/logstash -f /etc/logstash/conf.d/stream.conf -V
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties.
logstash 5.0.0
jruby 1.7.25 (1.9.3p551) 2016-04-13 867cb81 on Java HotSpot(TM) 64-Bit Server VM 1.8.0_111-b14 +jit [linux-amd64]
java 1.8.0_111 (Oracle Corporation)
jvm Java HotSpot(TM) 64-Bit Server VM / 25.111-b14
I installed the following two repos:
logstash-input-websocket version 3.0.2
ruby-ftw (http://github.com/jordansissel/ruby-ftw)
I created both gems by running gem install against each .gemspec file.
My Gemfile under the /usr/share/logstash folder was modified to add these two lines:
gem "logstash-input-websocket", :path => "/home/xav/source/logstash-input-websocket-master"
gem "ftw", :path => "/home/xav/source/ruby-ftw-master"
I know my Logstash configuration is OK (I checked it with the -t option).
Furthermore, on purpose I modified my stream.conf file to omit the url setting for the websocket input, to make sure the plugin was actually being used.
I got the expected error in the /var/log/logstash/logstash-plain.log file, as below:
xav@xav-VirtualBox:/var/log/logstash$ tail -f logstash-plain.log
[2016-11-01T11:40:28,998][ERROR][logstash.inputs.websocket] Missing a required setting for the websocket input plugin:
input {
websocket {
url => # SETTING MISSING
...
}
}
[2016-11-01T11:40:29,011][ERROR][logstash.agent ] fetched an invalid config {:config=>"\ninput {\n websocket {\n mode => client\n}\n}\n\noutput {\n\n stdout { }\n}\n\n", :reason=>"Something is wrong with your configuration."
I edited the stream.conf file to add the wss URL I want to read the JSON output from:
input {
  websocket {
    mode => client
    url => "wss://<my-url-to-websocket/something"
  }
}

output {
  stdout { }
}
I ran Logstash again. Everything seems to be working fine, BUT I don't get anything on stdout. The log file output is:
11-01T12:09:25,968][DEBUG][logstash.runner ] -------- Logstash Settings (* means modified) ---------
11-01T12:09:25,980][DEBUG][logstash.runner ] node.name: "xav-VirtualBox"
11-01T12:09:25,981][DEBUG][logstash.runner ] *path.config: "/etc/logstash/conf.d/stream.conf"
11-01T12:09:25,981][DEBUG][logstash.runner ] *path.data: "/var/lib/logstash" (default: "/usr/share/logstash/data")
11-01T12:09:25,981][DEBUG][logstash.runner ] config.test_and_exit: false
11-01T12:09:25,981][DEBUG][logstash.runner ] config.reload.automatic: false
11-01T12:09:25,981][DEBUG][logstash.runner ] config.reload.interval: 3
11-01T12:09:25,982][DEBUG][logstash.runner ] metric.collect: true
11-01T12:09:25,982][DEBUG][logstash.runner ] pipeline.id: "main"
11-01T12:09:25,982][DEBUG][logstash.runner ] pipeline.workers: 1
11-01T12:09:25,982][DEBUG][logstash.runner ] pipeline.output.workers: 1
11-01T12:09:25,983][DEBUG][logstash.runner ] pipeline.batch.size: 125
11-01T12:09:25,983][DEBUG][logstash.runner ] pipeline.batch.delay: 5
11-01T12:09:25,983][DEBUG][logstash.runner ] pipeline.unsafe_shutdown: false
11-01T12:09:25,983][DEBUG][logstash.runner ] path.plugins: []
11-01T12:09:25,984][DEBUG][logstash.runner ] config.debug: false
11-01T12:09:25,984][DEBUG][logstash.runner ] *log.level: "debug" (default: "info")
11-01T12:09:25,984][DEBUG][logstash.runner ] version: false
11-01T12:09:25,984][DEBUG][logstash.runner ] help: false
11-01T12:09:25,984][DEBUG][logstash.runner ] log.format: "plain"
11-01T12:09:25,984][DEBUG][logstash.runner ] http.host: "127.0.0.1"
11-01T12:09:25,984][DEBUG][logstash.runner ] http.port: 9600..9700
11-01T12:09:25,986][DEBUG][logstash.runner ] http.environment: "production"
11-01T12:09:25,986][DEBUG][logstash.runner ] *path.settings: "/etc/logstash" (default: "/usr/share/logstash/config")
11-01T12:09:25,986][DEBUG][logstash.runner ] *path.logs: "/var/log/logstash" (default: "/usr/share/logstash/logs")
11-01T12:09:25,987][DEBUG][logstash.runner ] --------------- Logstash Settings -------------------
11-01T12:09:26,039][DEBUG][logstash.agent ] Agent: Configuring metric collection
11-01T12:09:26,043][DEBUG][logstash.instrument.periodicpoller.os] PeriodicPoller: Starting {:polling_interval=>1, :polling_timeout=>60}
11-01T12:09:26,049][DEBUG][logstash.instrument.periodicpoller.jvm] PeriodicPoller: Starting {:polling_interval=>1, :polling_timeout=>60}
11-01T12:09:26,122][DEBUG][logstash.agent ] Reading config file {:config_file=>"/etc/logstash/conf.d/stream.conf"}
11-01T12:09:26,197][DEBUG][logstash.codecs.json ] config LogStash::Codecs::JSON/@id = "json_982d8975-bd47-4f64-8a68-0da7e7a59a55"
11-01T12:09:26,198][DEBUG][logstash.codecs.json ] config LogStash::Codecs::JSON/@enable_metric = true
11-01T12:09:26,198][DEBUG][logstash.codecs.json ] config LogStash::Codecs::JSON/@charset = "UTF-8"
11-01T12:09:26,202][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/@mode = "client"
11-01T12:09:26,202][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/@url = "wss://REDACTED"
11-01T12:09:26,202][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/@id = "93ebfb1d0936097ee295b418952f2dab3abb3ef8-1"
11-01T12:09:26,203][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/@enable_metric = true
11-01T12:09:26,203][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/@codec = <LogStash::Codecs::JSON id=>"json_982d8975-bd47-4f64-8a68-0da7e7a59a55", enable_metric=>true, charset=>"UTF-8">
11-01T12:09:26,203][DEBUG][logstash.inputs.websocket] config LogStash::Inputs::Websocket/@add_field = {}
11-01T12:09:26,211][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/@id = "line_a255b752-4d76-4933-b4d3-e76e427bbddb"
11-01T12:09:26,214][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/@enable_metric = true
11-01T12:09:26,215][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/@charset = "UTF-8"
11-01T12:09:26,215][DEBUG][logstash.codecs.line ] config LogStash::Codecs::Line/@delimiter = "\n"
11-01T12:09:26,218][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@id = "93ebfb1d0936097ee295b418952f2dab3abb3ef8-2"
11-01T12:09:26,219][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@enable_metric = true
11-01T12:09:26,219][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@codec = <LogStash::Codecs::Line id=>"line_a255b752-4d76-4933-b4d3-e76e427bbddb", enable_metric=>true, charset=>"UTF-8", delimiter=>"\n">
11-01T12:09:26,219][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@workers = 1
11-01T12:09:26,238][DEBUG][logstash.agent ] starting agent
11-01T12:09:26,242][DEBUG][logstash.agent ] starting pipeline {:id=>"main"}
11-01T12:09:26,650][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
11-01T12:09:26,654][INFO ][logstash.pipeline ] Pipeline main started
11-01T12:09:26,676][DEBUG][logstash.agent ] Starting puma
11-01T12:09:26,682][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
11-01T12:09:26,688][DEBUG][logstash.api.service ] [api-service] start
11-01T12:09:26,748][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
11-01T12:09:27,038][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:27 -0400}
11-01T12:09:28,053][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:28 -0400}
11-01T12:09:29,060][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:29 -0400}
11-01T12:09:30,100][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-01 12:09:30 -0400}
What am I missing?
CentOS 6.6 x86_64
Gearman v1.1.8
Settings for gearmand:
OPTIONS="--listen=0.0.0.0 \
-q mysql \
--mysql-host=localhost \
--mysql-port=xxx \
--mysql-user=xxx \
--mysql-password=xxx \
--mysql-db=xxx \
--mysql-table=xxx"
I began to notice that jobs get created in the database queue but are not being processed.
I checked the running process with:
service gearmand status
gearmand (pid 6419) is running...
Then I checked the error log, which shows:
ERROR 2015-02-09 08:42:40.000000 [ main ] Timeout occured when calling bind() for 0.0.0.0:4730 -> libgearman-server/gearmand.cc:679
FATAL 2015-02-09 11:42:29.000000 [ main ] pthread_mutex_lock(Invalid argument) -> libgearman-server/gearmand_con.cc:685
ERROR 2015-02-09 15:29:13.000000 [ proc ] GEARMAND_WAKEUP_RUN(Bad file descriptor) -> libgearman-server/gearmand_thread.cc:382
FATAL 2015-02-09 15:29:13.000000 [ main ] pthread_mutex_lock(Invalid argument) -> libgearman-server/gearmand_con.cc:685
ERROR 2015-02-10 14:35:52.000000 [ proc ] GEARMAND_WAKEUP_RUN(Bad file descriptor) -> libgearman-server/gearmand_thread.cc:382
FATAL 2015-02-10 14:35:52.000000 [ main ] pthread_mutex_lock(Invalid argument) -> libgearman-server/gearmand_con.cc:685
FATAL 2015-02-10 15:21:21.000000 [ main ] pthread_mutex_lock(Invalid argument) -> libgearman-server/gearmand_con.cc:685
FATAL 2015-02-11 09:03:29.000000 [ main ] pthread_mutex_lock(Invalid argument) -> libgearman-server/gearmand_con.cc:685
FATAL 2015-02-11 13:52:58.000000 [ main ] pthread_mutex_lock(Invalid argument) -> libgearman-server/gearmand_con.cc:685
FATAL 2015-02-12 10:30:23.000000 [ main ] pthread_mutex_lock(Invalid argument) -> libgearman-server/gearmand_con.cc:685
After restarting gearmand everything works correctly, and all queued jobs in the DB execute successfully.
Any help?
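A quick sanity check for the bind() timeout, assuming standard Linux tooling, is whether a stale gearmand (or anything else) is still holding port 4730 when the errors start:

# anything already listening on the gearman port?
netstat -tlnp | grep 4730

# more than one gearmand process running?
ps aux | grep [g]earmand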