I got an error when trying to launch JupyterLab on Dataproc. It shows a 403 error - Cannot read property 'path' of undefined.
Any idea what the issue could be?
Error screen 1
Error screen 2
The gcloud command is as follows:
gcloud beta dataproc clusters create \
  --enable-component-gateway \
  --bucket \
  --region asia-southeast1 \
  --zone asia-southeast1-c \
  --master-machine-type n1-standard-2 --master-boot-disk-size 500 \
  --num-workers 2 --worker-machine-type n1-standard-2 --worker-boot-disk-size 500 \
  --image-version 1.5-debian10 --optional-components ANACONDA,JUPYTER \
  --scopes 'https://www.googleapis.com/auth/cloud-platform' \
  --project
You cannot create a notebook under the root directory. Do it under GCS or Local Disk.
For test purposes, I'm trying to connect a module that introduces an abstraction layer over s3fs with custom business logic.
It seems like I have trouble connecting the s3fs client to the Minio container.
Here's how I created the container and attached the s3fs client (below I describe how I validated that the container is running properly):
import s3fs
import docker

client = docker.from_env()
container = client.containers.run(
    'minio/minio',
    "server /data --console-address ':9090'",
    environment={
        "MINIO_ACCESS_KEY": "minio",
        "MINIO_SECRET_KEY": "minio123",
    },
    ports={
        "9000/tcp": 9000,
        "9090/tcp": 9090,
    },
    volumes={'/tmp/minio': {'bind': '/data', 'mode': 'rw'}},
    detach=True)
container.reload()  # why reload: https://github.com/docker/docker-py/issues/2681

fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={
        'endpoint_url': "http://localhost:9000"  # tried 127.0.0.1:9000 with no success
    }
)
===========
>>> fs.ls('/')
[]
>>> fs.ls('/data')
Bucket doesn't exist exception
check that the container is running:
➜ ~ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
127e22c19a65 minio/minio "/usr/bin/docker-ent…" 56 seconds ago Up 55 seconds 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp hardcore_ride
check that the relevant volume is attached:
➜ ~ docker exec -it 127e22c19a65 bash
[root@127e22c19a65 /]# ls -l /data/
total 4
-rw-rw-r-- 1 1000 1000 4 Jan 11 16:02 foo.txt
[root@127e22c19a65 /]# exit
Since I verified that the volume binding works properly by shelling into the container, I expected to see the same results when accessing the container's storage via the s3fs client.
What is the bucket name that was created as part of this setup?
From the docs, I see that you have to use the <bucket_name>/<object_path> syntax to access the resources.
fs.ls('my-bucket')
['my-file.txt']
Also, if you look at the docs below, there are a couple of other ways to access it using fs.open. Can you give that a try?
https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf
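For example, a minimal sketch using the fs client from the question (the bucket name my-bucket and the file contents are hypothetical; fs.mkdir and fs.pipe come from the fsspec interface that s3fs implements): create a bucket first, write an object into it, then list and read it back with the <bucket_name>/<object_path> syntax.
# Create a bucket: a top-level path in s3fs corresponds to a bucket
fs.mkdir('my-bucket')

# Write a small object into the bucket
fs.pipe('my-bucket/my-file.txt', b'hello from s3fs')

# List the bucket contents using <bucket_name>/<object_path> syntax
print(fs.ls('my-bucket'))

# Read the object back via fs.open
with fs.open('my-bucket/my-file.txt', 'rb') as f:
    print(f.read())
Note that a loose file bind-mounted into MinIO's data directory is not a bucket; MinIO only exposes objects through buckets (top-level directories under /data), which would be consistent with fs.ls('/') returning an empty list here.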
I'm publishing a project via docker compose to AWS ECR but it fails on the last couple of steps. It's based on the new "docker compose" integration with an AWS context.
The error I receive is:
MicroservicedocumentGeneratorService TaskFailedToStart: ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3 time(s): RequestError: send request failed caused by: Post https://api.ecr....
The image is in an ECR private repository along with the others from the compose file.
I have authenticated with:
aws ecr get-login-password
The docker compose is:
microservice_documentGenerator:
  image: xxx.dkr.ecr.xxx.amazonaws.com/microservice_documentgenerator:latest
  networks:
    - publicnet
The original Dockerfile is:
FROM openjdk:11-jdk-slim
COPY /Microservice.DocumentGenerator/Microservice.DocumentGenerator.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The output before the error was:
[+] Running 54/54
- projext DeleteComplete 355.3s
- PublicnetNetwork DeleteComplete 310.5s
- LogGroup DeleteComplete 306.1s
- MicroservicedocumentGeneratorTaskExecutionRole DeleteComplete 272.2s
- MicroservicedocumentGeneratorTaskDefinition Del... 251.2s
- MicroservicedocumentGeneratorServiceDiscoveryEntry DeleteComplete 220.1s
- MicroservicedocumentGeneratorService DeleteComp... 211.9s
Try authenticating with:
aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
Also, can you mention where you are making the call from, and whether the server has permission to make the call to ECR?
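As a quick sanity check (assuming, on my part, that the deployment machine uses the same AWS CLI credentials), you could confirm which identity is making the calls and that it can see the repository; the repository name below is taken from the compose file and the region is a placeholder:
aws sts get-caller-identity
aws ecr describe-repositories --repository-names microservice_documentgenerator --region YOUR_REGION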
When trying to create a Dataproc cluster with a custom image I am getting this ...
│ Error: Error creating Dataproc cluster: googleapi: Error 400: Failed to resolve image version 'my-ubuntu18-custom'. Accepted image versions: [preview, 2.0-centos, 1.2-deb9, 1.3-debian9, 1.2-debian9, 1.5-debian10, 1.5-centos8, 2.0-ubuntu18, 1.0-debian9, 1.1-debian9, 1.5-ubuntu18, preview-centos8, preview-debian10, preview-ubuntu18, 2.0-debian, 1.4-debian9, preview-debian, 2.0-debian10, 1.1-debian, 1.0-debian, 1.2-debian, preview-ubuntu, 1.5-centos, 1.3-deb9, 1.4-ubuntu18, 1.5-ubuntu, preview-centos, 1.4-debian10, 1.1-deb9, 1.3-ubuntu, 1.4-ubuntu, 1.0, 1.1, 2.0, 1.2, 1.3, 1.3-debian10, 1.4, 2.0-ubuntu, 1.5, 1.4-debian, 1.5-debian, 1.0-deb9, 1.3-debian, 1.3-ubuntu18, 2.0-centos8]. See https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions for additional information on image versioning., badRequest
The custom images list shows the custom image, however ...
$ gcloud compute images list --no-standard-images | grep NAME:
NAME: my-ubuntu18-custom
You have to use the --image flag instead of --image-version when creating the Dataproc cluster with gcloud, and you have to specify the full image URI instead of just the short name.
You can find the full URI by looking for the "selfLink" when getting the full details of the image:
gcloud compute images describe my-ubuntu18-custom | grep selfLink
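If you are creating the cluster with gcloud, it would look something along these lines (the cluster name, project, and region are placeholders; the URI shape matches the selfLink output above):
gcloud dataproc clusters create my-cluster \
  --region=asia-southeast1 \
  --image=https://www.googleapis.com/compute/v1/projects/MY_PROJECT/global/images/my-ubuntu18-custom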
I'm trying to upload files into my IBM Cloud object store using the CLI. The command is the following:
:~ ibmcloud cos object-put --bucket Backup --body Downloads/DRIVING_MIVUE/Normal/F/FILE201217-151749F.MP4
FAILED
Mandatory Flag '--key' is missing
NAME:
ibmcloud cos object-put - Upload an object to a bucket.
USAGE:
ibmcloud cos object-put --bucket BUCKET_NAME --key KEY [--body FILE_PATH] [--cache-control CACHING_DIRECTIVES] [--content-disposition DIRECTIVES] [--content-encoding CONTENT_ENCODING] [--content-language LANGUAGE] [--content-length SIZE] [--content-md5 MD5] [--content-type MIME] [--metadata STRUCTURE] [--region REGION] [--output FORMAT] [--json]
OPTIONS:
--bucket BUCKET_NAME The name (BUCKET_NAME) of the bucket.
--key KEY The KEY of the object.
...
What does KEY mean here?
I tried to provide a string, like below, but I got an error.
ibmcloud cos object-put --bucket Backup --body Downloads/DRIVING_MIVUE/Normal/F/FILE201217-151749F.MP4 --key FILE201217-151749F
FAILED
The specified key does not exist.
The object key (or key name) uniquely identifies the object in a bucket. The following are examples of valid object key names:
4my-organization
my.great_photos-2014/jan/myvacation.jpg
videos/2014/birthday/video1.wmv
For example, when I run the below command
ibmcloud cos object-put --bucket vmac-code-engine-bucket --region us-geo --key test/package.json --body package.json
The file package.json on my machine will be uploaded to the test folder (directory) of the COS bucket vmac-code-engine-bucket.
Optionally, you can also pass a map of metadata to store:
{
  "file_name": "file_20xxxxxxxxxxxx45.zip",
  "label": "texas",
  "state": "Texas",
  "Date_to": "2019-11-09T16:00:00.000Z",
  "Sha256sum": "9e39dxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8ce6b68ede3a47",
  "Timestamp": "Thu, 17 Oct 2019 09:22:13 GMT"
}
For other parameters, refer to the command documentation here.
For more information, refer to the documentation here.
Based on what I have observed:
Key should be the name of the object.
Body should be the file path of the object that needs to be uploaded.
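So for the command in the question, something like the following should work (assuming the Backup bucket exists in your account and the CLI is configured for its region; otherwise add --region explicitly):
ibmcloud cos object-put --bucket Backup --key FILE201217-151749F.MP4 --body Downloads/DRIVING_MIVUE/Normal/F/FILE201217-151749F.MP4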
I'm trying to upgrade my GKE cluster with this command:
gcloud container clusters upgrade CLUSTER_NAME --cluster-version=1.15.11-gke.3 \
--node-pool=default-pool --zone=ZONE
I get the following output:
Upgrading test-upgrade-172615287... Done with 0 out of 5 nodes (0.0%): 2 being processed...done.
Timed out waiting for operation <Operation
clusterConditions: []
detail: u'Done with 0 out of 5 nodes (0.0%): 2 being processed'
name: u'operation-NUM-TAG'
nodepoolConditions: []
operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4)
progress: <OperationProgress
metrics: [<Metric
intValue: 5
name: u'NODES_TOTAL'>, <Metric
intValue: 0
name: u'NODES_FAILED'>, <Metric
intValue: 0
name: u'NODES_COMPLETE'>, <Metric
intValue: 0
name: u'NODES_DONE'>]
stages: []>
…
status: StatusValueValuesEnum(RUNNING, 2)
…>
ERROR: (gcloud.container.clusters.upgrade) Operation [DATA_SAME_AS_IN_TIMEOUT] is still running
I just discovered gcloud config set builds/timeout 3600, so I hope this doesn't happen again, e.g. in my CI. But if it does, is there a gcloud command that lets me know that the upgrade is still in progress? These two didn't provide that:
gcloud container clusters describe CLUSTER_NAME --zone=ZONE
gcloud container node-pools describe default-pool --cluster=CLUSTER_NAME --zone=ZONE
Note: Doing this upgrade in the console took 2 hours so I'm not surprised the command-line attempt timed out. This is for a CI, so I'm fine looping and sleeping for 4 hours or so before giving up. But what's the command that will let me know when the cluster is being upgraded, and when it either finishes or fails? The UI is showing the cluster is still undergoing the upgrade, so I assume there is some command.
TIA as usual
Bumped into the same issue.
All gcloud commands, including gcloud container operations wait OPERATION_ID (https://cloud.google.com/sdk/gcloud/reference/container/operations/wait), have the same 1-hour timeout.
At this point, there is no other way to wait for the upgrade to complete than to query gcloud container operations list and check the STATUS in a loop.
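A minimal sketch of that polling loop (the zone is a placeholder, and the filter assumes the node-pool upgrade is the only UPGRADE_NODES operation in flight):
while true; do
  RUNNING=$(gcloud container operations list \
    --zone=ZONE \
    --filter="operationType=UPGRADE_NODES AND status=RUNNING" \
    --format="value(name)")
  if [ -z "$RUNNING" ]; then
    echo "No running upgrade operations."
    break
  fi
  echo "Still upgrading: $RUNNING"
  sleep 120
done
Once the loop exits, the STATUS column of gcloud container operations list (or gcloud container operations describe on the last operation) shows whether the upgrade finished as DONE or ended with an error.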