swift client delete object failed - openstack-swift

Environment
Swift client on CentOS 7 (10.0.0.2):
[root@bogon ~]# pip2 show python-swiftclient
Name: python-swiftclient
Version: 2.7.0
Summary: OpenStack Object Storage API Client Library
Home-page: http://www.openstack.org/
Author: OpenStack
Author-email: openstack-dev@lists.openstack.org
License: UNKNOWN
Location: /usr/lib/python2.7/site-packages
Requires: futures, six, requests
Swift server on CentOS 7 (10.0.0.4):
[root@bogon ~]# swift --version
python-swiftclient 3.2.1.dev9
Question
From the Swift client, log in to the server and delete one .jpg file from the container "temporary".
Details
[root@bogon ~]# swift -A http://10.0.0.4:8080/auth/v1.0 -U admin:admin -K admin_pass list
contract
data
mask_contract
temporary
[root@bogon ~]# swift -A http://10.0.0.4:8080/auth/v1.0 -U admin:admin -K 806huayuan list temporary | tail
9f2f8626-a2ad-11e7-ad0b-1866daecc1a0.jpg
a25ebf08-a2b0-11e7-ad0b-1866daecc1a0.jpg
a6cfc990-a2ad-11e7-ad0b-1866daecc1a0.jpg
a8732914-a216-11e7-ad0b-1866daecc1a0.jpg
a87cda6a-77f8-11e7-befe-1866daecc1a0.jpg
ad186efc-a216-11e7-ad0b-1866daecc1a0.jpg
b255e2e6-a216-11e7-ad0b-1866daecc1a0.jpg
d1d010f2-0129-11e8-8cef-1866daecc1a0.jpg
f779a1ea-a2ad-11e7-ad0b-1866daecc1a0.jpg
ff4fbf7e-aa70-11e7-bbe0-1866daecc1a0.jpg
[root@bogon ~]# swift -A http://10.0.0.4:8080/auth/v1.0 -U admin:admin -K 806huayuan delete temporary ff4fbf7e-aa70-11e7-bbe0-1866daecc1a0.jpg
Error Deleting: temporary/f779a1ea-a2ad-11e7-ad0b-1866daecc1a0.jpg: Object DELETE failed: http://10.0.0.4:8080/v1/AUTH_admin/temporary/f779a1ea-a2ad-11e7-ad0b-1866daecc1a0.jpg 409 Conflict [first 60 chars of response] There was a conflict when trying t

Got it!
The reason is that the timestamp assigned to the DELETE request is earlier than the timestamp already stored on the object, so Swift refuses the delete with 409 Conflict:
http://lists.openstack.org/pipermail/openstack-dev/2014-April/033438.html
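In practice this usually means the clock on the proxy that stamps the DELETE is behind the object's stored timestamp. Below is a minimal sketch for confirming that, assuming the same tempauth v1.0 account and the storage URL shown in the error above; Swift returns the stored X-Timestamp on a HEAD of the object.
# Sketch: fetch a token from the v1.0 auth endpoint, HEAD the object that refused
# to delete, and compare its X-Timestamp with the clock on the proxy node.
TOKEN=$(curl -si -H 'X-Auth-User: admin:admin' -H 'X-Auth-Key: 806huayuan' \
        http://10.0.0.4:8080/auth/v1.0 \
        | awk -F': ' 'tolower($1) == "x-auth-token" {print $2}' | tr -d '\r')
curl -sI -H "X-Auth-Token: $TOKEN" \
     http://10.0.0.4:8080/v1/AUTH_admin/temporary/f779a1ea-a2ad-11e7-ad0b-1866daecc1a0.jpg \
     | grep -i '^x-timestamp'
date +%s   # run this on the proxy node (10.0.0.4); if it is smaller than the
           # X-Timestamp above, the DELETE gets an older timestamp and Swift
           # answers 409 Conflict until the clocks are brought back in sync (NTP).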

Related

How to connect python s3fs client to a running Minio docker container?

For test purposes, I'm trying to connect a module that introduces an abstraction layer over s3fs with custom business logic.
It seems I have trouble connecting the s3fs client to the MinIO container.
Here's how I created the container and attached the s3fs client (below I describe how I validated that the container is running properly):
import s3fs
import docker
client = docker.from_env()
container = client.containers.run('minio/minio',
                                  "server /data --console-address ':9090'",
                                  environment={
                                      "MINIO_ACCESS_KEY": "minio",
                                      "MINIO_SECRET_KEY": "minio123",
                                  },
                                  ports={
                                      "9000/tcp": 9000,
                                      "9090/tcp": 9090,
                                  },
                                  volumes={'/tmp/minio': {'bind': '/data', 'mode': 'rw'}},
                                  detach=True)
container.reload()  # why reload: https://github.com/docker/docker-py/issues/2681
fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={
        'endpoint_url': "http://localhost:9000"  # tried 127.0.0.1:9000 with no success
    }
)
===========
>>> fs.ls('/')
[]
>>> fs.ls('/data')
Bucket doesn't exist exception
Check that the container is running:
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
127e22c19a65 minio/minio "/usr/bin/docker-ent…" 56 seconds ago Up 55 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp hardcore_ride
Check that the relevant volume is attached:
➜ ~ docker exec -it 127e22c19a65 bash
[root@127e22c19a65 /]# ls -l /data/
total 4
-rw-rw-r-- 1 1000 1000 4 Jan 11 16:02 foo.txt
[root@127e22c19a65 /]# exit
Since I proved the volume binding works by shelling into the container, I expected to see the same results when accessing the container's filesystem via the s3fs client.
What is the bucket name that was created as part of this setup?
From the docs, I see you have to use <bucket_name>/<object_path> syntax to access the resources.
fs.ls('my-bucket')
['my-file.txt']
Also, if you look at the docs below, there are a couple of other ways to access it using fs.open; can you give that a try?
https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf
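Also worth noting: foo.txt sits directly under /data, i.e. outside any bucket, so an empty fs.ls('/') is expected until a bucket exists. Here is a minimal sketch for creating one; the bucket name my-bucket is hypothetical and it assumes the MinIO client (mc) is installed on the host. fs.mkdir('my-bucket') from the Python snippet above should achieve the same through s3fs.
mc alias set localminio http://localhost:9000 minio minio123   # point mc at the running container
mc mb localminio/my-bucket                                     # create the bucket
mc cp /tmp/minio/foo.txt localminio/my-bucket/                 # upload the test file into it
mc ls localminio/my-bucket                                     # verify the object is there
After that, fs.ls('my-bucket') from the same S3FileSystem should list the uploaded object.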

Is it possible to run Karate test in a pod? If possible, then how?

I just want to know whether I can run a Karate test in a pod, or whether there is any good suggestion on how to run it.
I tried to run the Karate test in a terminal and it works. I just want to know if I can run it from a Kubernetes pod. Nginx is also running in the pod.
You can run anything in a pod that you can run outside it; a pod just runs containers.
So create a Dockerfile, build a Docker image from it, and use that image to start the Karate pod (a kubectl sketch follows the Dockerfile below).
You can write the Dockerfile like this:
FROM maven:3-jdk-8-alpine
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY settings.xml /usr/share/maven/ref/
COPY pom.xml /tmp/pom.xml
COPY . /usr/src/app
RUN mvn -B -f /tmp/pom.xml -s /usr/share/maven/ref/settings-docker.xml prepare-package -DskipTests
CMD ["/usr/src/app/maven_runner.sh"]
I found an example here: https://github.com/neillfontes/karate-sample
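For the pod itself, a minimal sketch (the names below are placeholders, and it assumes the image built from the Dockerfile above has been pushed to a registry the cluster can pull from):
kubectl run karate-test --image=<your-registry>/karate_docker:latest --restart=Never   # one-off pod
kubectl logs -f karate-test     # the same test output you would see from `docker run`
kubectl delete pod karate-test  # clean up once the run has finished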
Posting as Community Wiki for future use.
@Harsh Manvar provided a good example; however, if you just build it from the Dockerfile, you will receive errors. You have to download all the files mentioned on GitHub. The correct order is:
$ git clone https://github.com/neillfontes/karate-sample.git
$ cd karate-sample
$ docker build -t karate_docker .
After the image is built, you can check it:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
karate_docker latest 9dc6d7a5278a About a minute ago 136MB
Then you can start the tests with:
$ docker run karate_docker
START: Running tests...
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running demo.DemoTest
11:57:49.684 [main] DEBUG c.i.karate.cucumber.CucumberRunner - init test class: class demo.DemoTest
11:57:50.412 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/get-token.feature
11:57:50.663 [main] DEBUG c.i.karate.cucumber.CucumberRunner - loading feature: /usr/src/app/target/test-classes/demo/features/make-request.feature
11:57:53.898 [main] INFO com.intuit.karate.ScriptBridge - karate.env system property was: null
11:57:54.867 [main] DEBUG c.i.k.h.a.RequestLoggingInterceptor -
1 > POST http://brentertainment.com/oauth2/lockdin/token
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Content-Length: 96

Tensorflow Serving: Rest API returns "Malformed request" error

Tensorflow Serving server (run with docker) responds to my GET (and POST) requests with this:
{ "error": "Malformed request: POST /v1/models/saved_model/" }
Precisely the same problem was already reported but never solved (supposedly, this is a StackOverflow kind of question, not a GitHub issue):
https://github.com/tensorflow/serving/issues/1085
https://github.com/tensorflow/serving/issues/1095
Any ideas? Thank you very much.
I verified that this does not work before v1.12 and does indeed work with v1.12.
> docker run -it -p 127.0.0.1:9000:8500 -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL_NAME=55 --rm tensorflow/serving
> curl http://localhost:9009/v1/models/55
{ "error": "Malformed request: GET /v1/models/55" }
Now try with v1.12:
> docker run -it -p 127.0.0.1:9000:8500 -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL_NAME=55 --rm tensorflow/serving:1.12.0
> curl http://localhost:9009/v1/models/55
{
  "model_version_status": [
    {
      "version": "1541703514",
      "state": "AVAILABLE",
      "status": {
        "error_code": "OK",
        "error_message": ""
      }
    }
  ]
}
There were two issues with my approach:
1) The status check request wasn't supported by my tensorflow_model_server (see https://github.com/tensorflow/serving/issues/1085 for details)
2) More importantly, when using Windows you must escape quotation marks in JSON. So instead of:
curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d "{"instances":[{"features":[1,1,1,1,1,1,1,1,1,1]}]}"
I should have used this:
curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d "{\"instances\":[{\"features\":[1,1,1,1,1,1,1,1,1,1]}]}"
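Another way to sidestep the quoting problem entirely is to keep the payload in a file and let curl read it from there (request.json is a hypothetical file containing the same body as above):
curl -X POST http://localhost:8501/v1/models/saved_model:predict \
     -H "Content-Type: application/json" \
     -d @request.json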
Depends on your model, but this is what my body looks like:
{"inputs": {"text": ["Hello"]}}
I used Postman to help me out, so that it knew the body was JSON.
This is for the predict API, so the URL ends in ":predict".
Again, that depends on which API you're trying to use.
Model status API is only supported in master branch. There is no TF serving release that supports it yet (the API is slated for upcoming 1.12 release). You can use the nightly docker image (tensorflow/serving:nightly) to test on master branch builds.
This solution was given by netf in issue #1128 in tensorflow/serving.
I already tried this solution; it works and I can get the model status. (Image demonstrating the model status output omitted.)
Hope I can help you.
If the master branch builds are not clear to you, you can contact me and I can give you instructions.
Email: mizeshuang@gmail.com

dashDB Local on fedora 25 - error code 130

I tried the 30-day trial of dashDB Local. I followed the steps described at this link:
https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.doc/admin/linux_deploy.html
I did not create a node configuration file because mine is an SMP setup.
I logged into my Docker Hub account and pulled the image:
docker login -u xxx -p yyyyy
docker pull ibmdashdb/local:latest-linux
The pull took 5 minutes or so. I waited for the image download to complete.
I ran the following command, and it completed successfully:
docker run -d -it --privileged=true --net=host --name=dashDB -v /mnt/clusterfs:/mnt/bludata0 -v /mnt/clusterfs:/mnt/blumeta0 ibmdashdb/local:latest-linux
Then I ran the logs command:
docker logs --follow dashDB
This showed that dashDB did not start but exited with error code 130:
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0f008f8e413d ibmdashdb/local:latest-linux "/usr/sbin/init" 16 seconds ago Exited (130) 1 seconds ago dashDB
#
The logs command shows this:
2017-05-17T17:48:11.285582000Z Detected virtualization docker.
2017-05-17T17:48:11.286078000Z Detected architecture x86-64.
2017-05-17T17:48:11.286481000Z
2017-05-17T17:48:11.294224000Z Welcome to dashDB Local!
2017-05-17T17:48:11.294621000Z
2017-05-17T17:48:11.295022000Z Set hostname to <orion>.
2017-05-17T17:48:11.547189000Z Cannot add dependency job for unit systemd-tmpfiles-clean.timer, ignoring: Unit is masked.
2017-05-17T17:48:11.547619000Z [ OK ] Reached target Timers.
<snip>
2017-05-17T17:48:13.361610000Z [ OK ] Started The entrypoint script for initializing dashDB local.
2017-05-17T17:48:19.729980000Z [100209.207731] start_dashDB_local.sh[161]: /usr/lib/dashDB_local_common_functions.sh: line 1816: /tmp/etc_profile-LOCAL.cfg: No such file or directory
2017-05-17T17:48:20.236127000Z [100209.713223] start_dashDB_local.sh[161]: The dashDB Local container's environment is not set up yet.
2017-05-17T17:48:20.275248000Z [ OK ] Stopped Create Volatile Files and Directories.
<snip>
2017-05-17T17:48:20.737471000Z Sending SIGTERM to remaining processes...
2017-05-17T17:48:20.840909000Z Sending SIGKILL to remaining processes...
2017-05-17T17:48:20.880537000Z Powering off.
So it looks like start_dashDB_local.sh is failing at line 1816 of /usr/lib/dashDB_local_common_functions.sh? I exported the image, and this is the code at line 1816 of dashDB_local_common_functions.sh:
update_etc_profile()
{
    local runtime_env=$1
    local cfg_file
    # Check if /etc/profile/dashdb_env.sh is already updated
    grep -q BLUMETAHOME /etc/profile.d/dashdb_env.sh
    if [ $? -eq 0 ]; then
        return
    fi
    case "$runtime_env" in
        "AWS" | "V1.5" ) cfg_file="/tmp/etc_profile-V15_AWS.cfg"
                         ;;
        "V2.0" ) cfg_file="/tmp/etc_profile-V20.cfg"
                 ;;
        "LOCAL" ) # dashDB Local Case and also the default
                  cfg_file="/tmp/etc_profile-LOCAL.cfg"
                  ;;
        *) logger_error "Invalid ${runtime_env} value"
           return
           ;;
    esac
I also see /tmp/etc_profile-LOCAL.cfg in the image. Did I miss any step here?
I also created the /mnt/clusterfs/nodes file, but it did not help; the same docker run command failed in the same way.
Please help.
I am using x86_64 Fedora 25.
# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
Go version: go1.7.4
Git commit: ae7d637/1.12.6
Built: Mon Jan 30 16:15:28 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-common-1.12.6-6.gitae7d637.fc25.x86_64
Go version: go1.7.4
Git commit: ae7d637/1.12.6
Built: Mon Jan 30 16:15:28 2017
OS/Arch: linux/amd64
#
# cat /etc/fedora-release
Fedora release 25 (Twenty Five)
# uname -r
4.10.15-200.fc25.x86_64
#
Thanks for bringing this to our attention. I reached out to our developer team. It seems this is happening because, inside the container, tmpfs gets mounted onto /tmp and wipes out all the scripts.
We have seen this issue, and moving to the latest version of Docker seems to fix it. Your docker version output shows you are on an older version.
So please install the latest Docker version, retry the deployment of dashDB Local, and update here.
Regards
Murali
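For reference, one way to move to a current Docker release on Fedora is via the upstream Docker CE repository. This is a sketch only; verify the repository URL and package name against the official install instructions for your Fedora version.
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf -y install docker-ce
sudo systemctl enable --now docker
docker version    # the server should now report something newer than 1.12.x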

Orion Context Broker: reset by peer when calling updateContext

Just after installing the Context Broker, I tried to test it by creating a new entity as described in the Entity Creation section of https://forge.fi-ware.org/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_User_and_Programmers_Guide, but I'm getting a "connection reset by peer" error.
The log doesn't seem to say anything, even though I raised the trace level with the -t 0-255 option.
Additional info:
$ contextBroker --version
0.14.0
$ ps aux | grep context
/usr/bin/contextBroker -port 1026 -logDir /var/log/contextBroker -pidpath /var/log/contextBroker/contextBroker.pid -dbhost localhost -db orion -t 0-255
The issue was fixed by updating the Context Broker to a newer version, in my case 0.14.1, which can be found here: http://repositories.testbed.fi-ware.org/repo/rpm/x86_64/.
I can't say exactly what was wrong, since after updating everything worked fine.
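For completeness, the update itself can be done straight from that repository. The exact RPM file name below is hypothetical, so browse the repository URL above and pick the 0.14.1 package that matches your architecture:
wget http://repositories.testbed.fi-ware.org/repo/rpm/x86_64/contextBroker-0.14.1-1.x86_64.rpm   # hypothetical file name
sudo yum -y localinstall contextBroker-0.14.1-1.x86_64.rpm
sudo service contextBroker restart
contextBroker --version    # should now report 0.14.1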