Google Dataflow Pipeline creation fails with 400: Bad Request / invalid grant - google-cloud-storage

I have been building templates for Google Dataflow for over a year now. I never had a problem creating templates and uploading them to GCS via the options.setTemplateLocation(templatePath) call. Since today, when creating the pipeline with Pipeline.create(options) and running the Java program in Eclipse, I get the following exception:
Exception in thread "main" java.lang.RuntimeException: Failed to construct instance from factory method DataflowRunner#fromOptions(interface org.apache.beam.sdk.options.PipelineOptions)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:233)
at org.apache.beam.sdk.util.InstanceBuilder.build(InstanceBuilder.java:162)
at org.apache.beam.sdk.PipelineRunner.fromOptions(PipelineRunner.java:52)
at org.apache.beam.sdk.Pipeline.create(Pipeline.java:142)
at mypackage.PipelineCreation.getTemplatePipeline(PipelineCreation.java:34)
at myotherpackage.Main.main(Main.java:51)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.beam.sdk.util.InstanceBuilder.buildFromMethod(InstanceBuilder.java:222)
... 5 more
Caused by: java.lang.RuntimeException: Unable to verify that GCS bucket gs://my-projects-staging-bucket exists.
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.verifyPathIsAccessible(GcsPathValidator.java:92)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.validateOutputFilePrefixSupported(GcsPathValidator.java:61)
at org.apache.beam.runners.dataflow.DataflowRunner.fromOptions(DataflowRunner.java:228)
... 10 more
Caused by: com.google.api.client.http.HttpResponseException: 400 Bad Request
{
"error" : "invalid_grant",
"error_description" : "Bad Request"
}
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1070)
at com.google.auth.oauth2.UserCredentials.refreshAccessToken(UserCredentials.java:207)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:149)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:135)
at com.google.auth.http.HttpCredentialsAdapter.initialize(HttpCredentialsAdapter.java:96)
at com.google.cloud.hadoop.util.ChainingHttpRequestInitializer.initialize(ChainingHttpRequestInitializer.java:52)
at com.google.api.client.http.HttpRequestFactory.buildRequest(HttpRequestFactory.java:93)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.buildHttpRequest(AbstractGoogleClientRequest.java:300)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.hadoop.util.ResilientOperation$AbstractGoogleClientRequestExecutor.call(ResilientOperation.java:166)
at com.google.cloud.hadoop.util.ResilientOperation.retry(ResilientOperation.java:66)
at org.apache.beam.sdk.util.GcsUtil.getBucket(GcsUtil.java:505)
at org.apache.beam.sdk.util.GcsUtil.bucketAccessible(GcsUtil.java:492)
at org.apache.beam.sdk.util.GcsUtil.bucketAccessible(GcsUtil.java:457)
at org.apache.beam.sdk.extensions.gcp.storage.GcsPathValidator.verifyPathIsAccessible(GcsPathValidator.java:88)
... 12 more
I was logged in to gcloud with another account today, but I logged back in with the account associated with the project as "Owner" using gcloud auth login.
I also restarted Eclipse, but the same error keeps occurring. When trying to run the pipeline locally I get a different error, but also with the "invalid_grant" / "Bad Request" content. Restarting the laptop had no effect either.
My pom defines google-cloud-dataflow-java-sdk-all version 2.2.0; upgrading to 2.5.0 had no effect.
I am able to copy data to the bucket with gsutil from the command line. But when running the Java program from the command line with mvn compile exec:java -Dexec.mainClass=mypackage.Main, I still get the same errors.
My function to create a templatePipeline looks like the following:
public static Pipeline getTemplatePipeline(String jobName, String templatePath) {
    DataflowPipelineOptions options = PipelineOptionsFactory.as(DataflowPipelineOptions.class);
    options.setProject("my-project-id");
    options.setRunner(DataflowRunner.class);
    options.setStagingLocation("gs://my-projects-staging-bucket/binaries");
    options.setTempLocation("gs://my-projects-staging-bucket/binaries/tmp");
    options.setGcpTempLocation("gs://my-projects-staging-bucket/binaries/tmp");
    options.setZone("europe-west3-a");
    options.setWorkerMachineType("n1-standard-2");
    options.setJobName(jobName);
    options.setMaxNumWorkers(2);
    options.setDiskSizeGb(40);
    options.setTemplateLocation(templatePath);
    return Pipeline.create(options);
}
Any help is highly appreciated.

You don't have to use a service account; you can keep using gcloud. Run the following command and log in with your account:
gcloud auth application-default login

I found the solution in the quickstart docs.
It seems gcloud auth is no longer used and you have to use a service account. So, as in the docs, I created a service account with the role "project/owner" and downloaded its JSON key file to $path.
Then, on my Mac, I used export GOOGLE_APPLICATION_CREDENTIALS="$path" and, within the same session, ran the command mentioned in the question to compile and execute the Java program.
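As an alternative to exporting the environment variable, the credential can also be attached to the pipeline options programmatically. A minimal sketch, assuming google-auth-library-oauth2-http is on the classpath; the helper and key path are hypothetical:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Collections;
import com.google.auth.oauth2.GoogleCredentials;
import org.apache.beam.sdk.extensions.gcp.options.GcpOptions;

class CredentialHelper {
    // Load a service-account key directly instead of relying on
    // GOOGLE_APPLICATION_CREDENTIALS being set in the shell session.
    static void useServiceAccount(GcpOptions options, String keyPath) throws IOException {
        GoogleCredentials credentials = GoogleCredentials
                .fromStream(new FileInputStream(keyPath))
                .createScoped(Collections.singletonList("https://www.googleapis.com/auth/cloud-platform"));
        options.setGcpCredential(credentials);
    }
}

Since DataflowPipelineOptions extends GcpOptions, this can be applied to the options built in getTemplatePipeline before calling Pipeline.create(options).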

Related

OpenSource: Encryption of JDBC Password in configuration properties file

I noticed a plugin is available for the enterprise version (https://download.rundeck.com/plugins/encrypted-datasource-plugin.html): is there an option for users of Rundeck open source to perform the same kind of encryption of the datasource password in the configuration file?
Since many people mention writing their own Java programs and leveraging the Jasypt utilities, I tried this. I have two jar files (one to encrypt and one to decrypt). I created a directory (since I'm using an rpm-based Rundeck 3.3 installation) called /var/lib/rundeck/lib and added it to the JVM classpath in /etc/sysconfig/rundeckd via export RDECK_JVM_SETTINGS="-Djava.class.path=/var/lib/rundeck/lib/*". I converted my /etc/rundeck/rundeck-config.properties file to groovy format and updated /etc/sysconfig/rundeck with export RDECK_CONFIG_FILE="/etc/rundeck/rundeck-config.groovy". However, when I change the /etc/rundeck/rundeck-config.groovy entry for datasource.password to:
datasource.password=MyDecrypt("MyTest123Password")
I get an error in the Rundeck logs after restarting:
[2020-09-08T18:01:03,168] WARN context.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'application': Initialization of bean failed; nested exception is groovy.lang.MissingMethodException: No signature of method: groovy.util.ConfigSlurper$_parse_closure5.MyDecrypt() is applicable for argument types: (String) values: [MyTest123Password]
Any suggestions?
That encryption is only available in Rundeck Enterprise; perhaps the best approach on Rundeck Community is to secure the rundeck-config.properties file through UNIX file permissions.
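For context, the kind of standalone Jasypt utility the question describes is only a few lines; a hypothetical sketch (class name, environment variable, and sample value are illustrative, not the poster's actual jars):

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

public class PasswordCodec {
    public static void main(String[] args) {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        // Keep the master password out of config files, e.g. in the environment.
        encryptor.setPassword(System.getenv("JASYPT_MASTER_PASSWORD"));
        String cipherText = encryptor.encrypt("MyTest123Password");
        System.out.println("encrypted: " + cipherText);
        System.out.println("decrypted: " + encryptor.decrypt(cipherText));
    }
}

Note that the MissingMethodException above is expected either way: ConfigSlurper can only call methods it can resolve, so an unqualified MyDecrypt(...) in rundeck-config.groovy will not find a class that merely sits on the JVM classpath.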

How to register app from private repo in Spring Cloud dataflow 2.6.1

I'm using SCDF 2.6.1 on OpenShift 3, and I'm getting an error while registering an app. The log looks like this:
java.lang.NullPointerException: null
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getRegistryRequest(DefaultContainerImageMetadataResolver.java:162)
at org.springframework.cloud.dataflow.configuration.metadata.container.DefaultContainerImageMetadataResolver.getImageLabels(DefaultContainerImageMetadataResolver.java:110)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.resolvePortNamesFromContainerImage(BootApplicationConfigurationMetadataResolver.java:215)
at org.springframework.cloud.dataflow.configuration.metadata.BootApplicationConfigurationMetadataResolver.listPortNames(BootApplicationConfigurationMetadataResolver.java:163)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.getInfo(AppRegistryController.java:193)
at org.springframework.cloud.dataflow.server.controller.AppRegistryController.info(AppRegistryController.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I checked the code at DefaultContainerImageMetadataResolver.java:162:
// Convert the image name into a well-formed ContainerImage
ContainerImage containerImage = this.containerImageParser.parse(imageName);
// Find a registry configuration that matches the image's registry host
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
// Retrieve a registry authorizer that supports the configured authorization type.
RegistryAuthorizer registryAuthorizer = this.registryAuthorizerMap.get(registryConf.getAuthorizationType());
I'm pretty sure the error occurs because registryConf is null, as a result of
RegistryConfiguration registryConf = this.registryConfigurationMap.get(containerImage.getRegistryHost());
How do I put my private repo URI into registryConfigurationMap?
I have tried putting an imagePullSecret in the deployment.yml that is registered with the private repo, but it doesn't seem to work, because in the startup log I still see:
2020-09-03 04:55:24.111 INFO 1 --- [ main] urationMetadataResolverAutoConfiguration :
Final Registry Configurations: {registry-1.docker.io=RegistryConfiguration{registryHost='registry-1.docker.io', user='null', secret='****', authorizationType=dockeroauth2, manifestMediaType='application/vnd.docker.distribution.manifest.v2+json', disableSslVerification='false',
extra={registryAuthUri=https://auth.docker.io/token?service=registry.docker.io&scope=repository:{repository}:pull&offline_token=1&client_id=shell }}}
The only time the SCDF server downloads a container image layer is when it looks up the app metadata.
Currently this is configured to use the Docker registry host (as this is where all the out-of-the-box applications are hosted).
If you want to override it, you can modify the registry configuration property values at server startup and proceed.
Keep in mind that this configuration is only needed to download the app-metadata layer of the image, not to pull the entire container image on the SCDF server side.
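To make the registryConfigurationMap part concrete: the map is populated from the server's container-registry configuration properties. A hedged sketch of such an override, following the spring.cloud.dataflow.container.registry-configurations property convention from the SCDF docs; the registry host, credentials, and authorization type below are placeholders to adapt:

spring.cloud.dataflow.container.registry-configurations.[myregistry].registry-host=myregistry.example.com:5000
spring.cloud.dataflow.container.registry-configurations.[myregistry].authorization-type=basicauth
spring.cloud.dataflow.container.registry-configurations.[myregistry].user=myuser
spring.cloud.dataflow.container.registry-configurations.[myregistry].secret=mysecret

The map key must match what containerImage.getRegistryHost() yields for your image URI; otherwise the lookup returns null and produces exactly the NullPointerException quoted above.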

SAM Deployment failed Error- Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

When I try to deploy the package with SAM, the very first status that appears in the CloudFormation console is ROLLBACK_IN_PROGRESS, which then changes to ROLLBACK_COMPLETE.
I have tried deleting the stack and deploying again, but the same issue occurs every time.
The error in the terminal looks like this:
Sourcing local options from ./SAMToolkit.devenv
SAM_PARAM_PKG environment variable not set
SAMToolkit will operate in legacy mode.
Please set SAM_PARAM_PKG in your .devenv file to run modern packaging.
Run 'sam help package' for more information
Runtime: java
Attempting to assume role from AWS Identity Broker using account 634668058279
Assumed role from AWS Identity Broker successfully.
Deploying stack sam-dev* from template: /home/***/1.0/runtime/sam/template.yml
sam-additional-artifacts-url.txt was not found, which is fine if there is no additional artifacts uploaded
Replacing BATS::SAM placeholders in template...
Uploading template build/private/tmp/sam-toolkit.yml to s3://***/sam-toolkit.yml
make_bucket failed: s3://sam-dev* An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: build/private/tmp/sam-toolkit.yml to s3://sam-dev*/sam-toolkit.yml
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id sam-dev* does not exist
sam-dev* will be created.
Creating ChangeSet ChangeSet-2020-01-20T12-25-56Z
Deploying stack sam-dev*. Follow in console: https://aws-identity-broker.amazon.com/federation/634668058279/CloudFormation
ChangeSet ChangeSet-2020-01-20T12-25-56Z in sam-dev* succeeded
"StackStatus": "REVIEW_IN_PROGRESS",
sam-dev* reached REVIEW_IN_PROGRESS
Deploying stack sam-dev*. Follow in console: https://console.aws.amazon.com/cloudformation/home?region=us-west-2
Waiting for stack-create-complete
Waiter StackCreateComplete failed: Waiter encountered a terminal failure state
Command failed.
Please see the logs above.
It turned out I had set SQS as the event source for the Lambda, but had not provided permissions like this
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: "*"
in the Lambda's policies.
I found this error in the "Events" tab of the CloudFormation console.
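For context, a sketch of where such a statement sits in a SAM template; the resource names, handler, and queue below are hypothetical:

MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.Handler::handleRequest
    Runtime: java8
    Policies:
      - Statement:
          - Effect: Allow
            Action:
              - sqs:ReceiveMessage
              - sqs:DeleteMessage
              - sqs:GetQueueAttributes
            Resource: "*"
    Events:
      MyQueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt MyQueue.Arn

Without the ReceiveMessage/DeleteMessage/GetQueueAttributes permissions on the execution role, the SQS event source mapping cannot be created and stack creation rolls back.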

Jenkins CORS Filter plugin not adding Access-Control-Allow-Origins header

I am trying to add CORS support to my Jenkins server so I could access the REST API from the browser. From looking around, the recommended approach is to use the CORS Filter plugin.
I have installed it, enabled it, and added http://localhost to the Access-Control-Allow-Origins field, as well as GET to the Access-Control-Allow-Methods field. However, these headers are not showing up in the responses.
This plugin has not been updated in a few years, so I'm not sure if it's compatible with the latest version of Jenkins. I'm running version 2.172.
In the Jenkins system log I see these errors; I'm not sure whether they are related or relevant:
Caught exception evaluating: descriptor.getPropertyType(instance,field).itemTypeDescriptorOrDie in /configure. Reason: java.lang.reflect.InvocationTargetException
java.lang.AssertionError: class hudson.ivy.IvyBuildTrigger$IvyConfiguration is missing its descriptor in public hudson.ivy.IvyBuildTrigger$IvyConfiguration[] hudson.ivy.IvyBuildTrigger$DescriptorImpl.getConfigurations(). See https://jenkins.io/redirect/developer/class-is-missing-descriptor
Caught exception evaluating: h.filterDescriptors(it,attrs.descriptors) in /configure. Reason: java.lang.NullPointerException: Descriptor list is null for context 'class hudson.model.Hudson' in thread 'Handling GET /configure from 100.71.26.18 : qtp589873731-14 Jenkins/configure.jelly GlobalLibraries/config.jelly LibraryConfiguration/config.jelly SCMRetriever/DescriptorImpl/config.jelly MultiSCM/DescriptorImpl/config.jelly'
java.lang.NullPointerException: Descriptor list is null for context 'class hudson.model.Hudson' in thread 'Handling GET /configure from 100.71.26.18 : qtp589873731-14 Jenkins/configure.jelly GlobalLibraries/config.jelly LibraryConfiguration/config.jelly SCMRetriever/DescriptorImpl/config.jelly MultiSCM/DescriptorImpl/config.jelly'
These errors have at org.jenkinsci.plugins.corsfilter.AccessControlsFilter.doFilter(AccessControlsFilter.java:79) in their stack trace.
Does anyone know of a good way to enable CORS support for Jenkins REST API?
I'm on Jenkins version 2.303 and struggle with the same issue. I would recommend adding the port number to the localhost URI you defined in the plugin settings, even though I'm pretty sure that won't change anything; it actually seems the plugin has no effect at all.
Probably the best solution is to set up your own proxy to take care of CORS. Here is a good, well-documented example: Build a Node.js Proxy Server in Under 10 minutes!
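For what it's worth, all the plugin is meant to do is inject response headers. A minimal sketch of equivalent behavior as a plain servlet filter (the origin value is illustrative); it also shows why the port matters, since the browser's Origin header includes it and the allowed value must match it exactly:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class SimpleCorsFilter implements Filter {
    @Override public void init(FilterConfig config) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Scheme, host, and port must all match the page's origin exactly.
        response.setHeader("Access-Control-Allow-Origin", "http://localhost:8000");
        response.setHeader("Access-Control-Allow-Methods", "GET");
        chain.doFilter(req, res);
    }
}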

ClientResponseException in OpenStack4J on IBM Bluemix Object Storage service

I am following this guide to connect to IBM Object Storage for Bluemix with Java:
https://developer.ibm.com/recipes/tutorials/connecting-to-ibm-object-storage-for-bluemix-with-java/
I have double-checked the values against the credentials in the service, but when I execute the authenticate() method I get the following exception:
Caused by: ClientResponseException{message=Not Found, status=404, status-code=NOT_FOUND}
at org.openstack4j.core.transport.HttpExceptionHandler.mapException(HttpExceptionHandler.java:38)
at org.openstack4j.core.transport.HttpExceptionHandler.mapException(HttpExceptionHandler.java:23)
at org.openstack4j.openstack.internal.OSAuthenticator.authenticateV3(OSAuthenticator.java:158)
at org.openstack4j.openstack.internal.OSAuthenticator.invoke(OSAuthenticator.java:70)
at org.openstack4j.openstack.client.OSClientBuilder$ClientV3.authenticate(OSClientBuilder.java:165)
at org.openstack4j.openstack.client.OSClientBuilder$ClientV3.authenticate(OSClientBuilder.java:128)
at com.servengine.objectstorage.ObjectStorageClient.postConstruct(ObjectStorageClient.java:32)
... 85 more
Is there any way to know which value is wrong (URL, userId, password, project, domain, ...)?
Thanks
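For reference, the recipe's authenticate() call boils down to an Identity V3 flow like this sketch (endpoint and identifiers are placeholders taken from the service credentials); a 404 thrown from authenticateV3 typically points at the identity endpoint URL, e.g. a missing /v3 path, rather than at the user or password:

import org.openstack4j.api.OSClient.OSClientV3;
import org.openstack4j.model.common.Identifier;
import org.openstack4j.openstack.OSFactory;

public class AuthSketch {
    public static void main(String[] args) {
        OSClientV3 os = OSFactory.builderV3()
                .endpoint("https://identity.example.com/v3") // must include the /v3 path
                .credentials("userId", "password")
                .scopeToProject(Identifier.byId("projectId"))
                .authenticate();
        System.out.println(os.getToken().getId());
    }
}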