Connection to an S3 instance using a service-connector - swisscomdev

I'm trying to create a service-connector to my s3 instance like this:
cf service-connector 13001 mybucketname.ds31s3.swisscom.com:443
But I get the following error:
Server-Error 403: Check of security groups failed (no access)
I have created my service key according to this documentation.
Connecting to my MongoDB works perfectly using a service connector.

You can access Swisscom's S3 directly without the service connector.
The error message suggests that your current org and space do not have access to the S3 service. This is usually the case if there is no app binding for that service in the current space. Please check whether you created your service key in the right org and space.
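For example, you can verify your target and inspect the service key with standard cf commands (a rough sketch; the service instance and key names are placeholders):
cf target -o MY_ORG -s MY_SPACE     # make sure this is the org/space the S3 service lives in
cf services                         # confirm the S3 service instance is listed here
cf service-keys my-s3-service       # list the keys created for it
cf service-key my-s3-service my-key # print the credentials for direct access to S3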

There was a misconfiguration due to security changes. We fixed the issue, so connecting to S3 with the service-connector should now work.

Related

You must first connect to a database to use Browser Sync in Neo4j

I deployed Neo4j in Kubernetes, so when I tried to access http://:7474/ it showed me this error (ServiceUnavailable: WebSocket connection failure. Due to security constraints in your web browser, the reason for the failure is not available to this Neo4j Driver.). How can I solve it?
You need to uncomment a single line inside $NEO4J_HOME/conf/neo4j.conf:
dbms.connector.bolt.address=0.0.0.0:7687
The issue is well described here
If you're using the Neo4j Helm chart, you will have a values.yaml instead of neo4j.conf; that case is also well described here.
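For reference, the relevant lines in $NEO4J_HOME/conf/neo4j.conf look roughly like this on a 3.0-era install (a minimal sketch; on later 3.x releases the equivalent setting is named dbms.connector.bolt.listen_address):
# accept Bolt connections on all interfaces instead of localhost only
dbms.connector.bolt.enabled=true
dbms.connector.bolt.address=0.0.0.0:7687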

How to connect KSQL with IBM Cloud Event Streams?

We created a project with IBM Functions and Event Streams in IBM Cloud.
Now I am trying to connect KSQL with IBM Cloud Event Streams, and I am following the document to get a basic idea of the integration.
Following the instructions, I created a file called ksql-server.properties and modified bootstrap.servers, username, and password according to my credentials. Then I ran ksql http://localhost:8088 --config-file ksql-server.properties with the KSQL local CLI. I assume everything has run correctly so far, since the ksql> prompt shows at the front of every new line...
Then I decided to check whether KSQL was connected to my IBM Cloud instance by running SHOW topics;
It turned out some error lines appeared:
`Error issuing POST to KSQL server. path:ksql'`
`Caused by: com.fasterxml.jackson.databind.JsonMappingException: Failed to set 'ssl.protocol' to 'TLSv1.2' (through reference chain: io.confluent.ksql.rest.entity.KsqlRequest["streamsProperties"])`
`Caused by: Failed to set 'ssl.protocol' to 'TLSv1.2' (through reference chain: io.confluent.ksql.rest.entity.KsqlRequest["streamsProperties"])`
`Caused by: Failed to set 'ssl.protocol' to 'TLSv1.2'`
`Caused by: Cannot override property 'ssl.protocol'`
Also, I am quite lost at step 4 when it tells me to:
`Then start DataGen twice as follows:
i. With bootstrap-server=HOSTNAME:PORTNUMBER quickstart=users format=json topic=users maxInterval=10000 to start creating users events.
ii. With bootstrap-server=HOSTNAME:PORTNUMBER quickstart=pageviews format=delimited topic=pageviews maxInterval=10000 to start creating pageviews events.`
Has anyone done this before, or would anyone be willing to help me out? Thank you very much!
The IBM document is very out of date. KSQL runs as a client/server. The server needs to be run with the details of the broker, and then you can connect to it with a client, including the CLI, REST API, or web interface provided by Confluent Control Center.
So you need to run the KSQL server using your properties file:
./bin/ksql-server-start ksql-server.properties
and then connect to it with the CLI (for example):
./bin/ksql http://localhost:8088
See https://docs.confluent.io/current/ksql/docs/installation/installing.html for more information.
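For what it's worth, the Event Streams connection details belong in the same properties file you pass to ksql-server-start; a rough sketch, assuming the standard Kafka SASL_SSL client settings (the bootstrap servers, username, and password are placeholders taken from your Event Streams credentials):
# ksql-server.properties (sketch)
listeners=http://0.0.0.0:8088
bootstrap.servers=HOSTNAME:PORTNUMBER
security.protocol=SASL_SSL
ssl.protocol=TLSv1.2
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="USERNAME" password="PASSWORD";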

Cannot Delete an AWS VPC

I want to delete an AWS VPC; I don't know how it came into existence. When I try to delete it in the AWS Console, it says:
We could not delete the following VPC (vpc-0a72ac71) Network interface
'eni-ce2a0d10' is currently in use. (Service: AmazonEC2; Status Code:
400; Error Code: InvalidParameterValue; Request ID:
821d8a6d-3d9b-4c24-b372-314ea9b18b23)
As it mentions "AmazonEC2" in the error message, I suspected there might be some EC2 instances residing in this VPC. So I went into EC2 dashboard but found no EC2 exist there. However, I found there are two security groups associated with this vpc. So I decided to delete them hoping that's the cause of the error. But when I tried to do so, I got this message:
As the message says, these security groups are associated with some network interfaces. Therefore, I decided to 'Detach' those but I got this error message:
Error deleting network interfaces eni-ce2a0d10: You do not have
permission to access the specified resource. eni-0b7ff712: You do not
have permission to access the specified resource.
But I'm the root user, so I assume I should be able to do whatever I want, except if the resource is managed by AWS itself or by another account.
I know this network interface is being used somewhere, but it would be very time-consuming to go through each AWS service and check.
I've already checked the AWS RDS service, and no instance or RDS subnet has been created.
I've already checked this question and this with no luck.
I found the root cause of this issue.
Short Answer:
That VPC was created solely for the WorkDocs service instance, so AWS was preventing me from deleting its VPC and any of its dependent services and pieces.
How I figured it out:
First, I noticed something interesting written in the 'Description' column of the 'undeletable' network interfaces (you can see them in the last figure of the original post):
"AWS created network interface for directory d-90672d6b72."
From "directory", I suspected that this might have something do to with AWS Directory Service. So I went to this service and noticed there is a directory associated with the VPC:
So I tried to remove this directory but I got this error message:
Error - Directory cannot be deleted This directory still has
authorized applications, and cannot be deleted.  To delete this
directory, complete all of the following steps: • Delete the WorkDocs
site attached to this directory.
 
Therefore, I went to AWS WorkDocs Service and found it and deleted it:
Now that the directory was also deleted (circled in red), I went back to delete those network interfaces. However, I realized that they had vanished! (I guess Amazon removed them on its own.) I went to the VPC service to see whether I was now able to delete the VPC. Guess what? That VPC had vanished too!
Now I understand what was happening. That VPC was created solely for the WorkDocs service instance. I wish Amazon was more transparent about it.
As a more generic answer to the "Error deleting network interface" issue, it happens when a network interface was created automatically for a higher-level AWS resource.
The generic solution is to manage the network interface directly through that higher-level resource, such as WorkDocs or EFS.
In my case it happened when I wanted to delete a security group assigned to network interfaces created by an EFS volume.
So I went into the EFS console and removed the security group from the EFS file system.
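As a general tip for tracking down what owns a stuck interface without clicking through every service, the Description and RequesterId fields usually name the creating service; a hedged sketch with the AWS CLI (interface IDs taken from the error message above):
# list the stuck interfaces and see what created them
aws ec2 describe-network-interfaces \
    --network-interface-ids eni-ce2a0d10 eni-0b7ff712 \
    --query 'NetworkInterfaces[].[NetworkInterfaceId,Description,RequesterId,Status]' \
    --output table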

EMR Spark Fails to Save Dataframe to S3

I am using the RunJobFlow command to spin up a Spark EMR cluster. This command sets the JobFlowRole to an IAM Role which has the policies AmazonElasticMapReduceforEC2Role and AmazonRedshiftReadOnlyAccess. The first policy contains an action to allow all s3 permissions.
When the EC2 instances spin up, they assume this IAM role, and generate temporary credentials via STS.
The first thing which I do is read a table from my Redshift cluster into a Spark Dataframe using the com.databricks.spark.redshift format and using the same IAM Role to unload the data from redshift as I did for the EMR JobFlowRole.
So far as I understand, this runs an UNLOAD command on Redshift to dump into the S3 bucket I specify. Spark then loads the newly unloaded data into a Dataframe. I use the recommended s3n:// protocol for the tempdir option.
This command works great, and it always successfully loads the data into the Dataframe.
I then run some transformations and attempt to save the dataframe in CSV format to the same S3 bucket Redshift unloaded into.
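For context, the read/write path looks roughly like this (a hedged sketch of the spark-redshift usage; the JDBC URL, table, bucket, and role ARN are placeholders):
// Read a Redshift table into a DataFrame; spark-redshift UNLOADs it through the S3 tempdir
val df = spark.read
  .format("com.databricks.spark.redshift")
  .option("url", "jdbc:redshift://REDSHIFT_HOST:5439/DB?user=USER&password=PASSWORD")
  .option("dbtable", "my_table")
  .option("tempdir", "s3n://MY_BUCKET/tmp/")
  .option("aws_iam_role", "arn:aws:iam::ACCOUNT_ID:role/MY_EMR_ROLE")
  .load()

// ...transformations...

// Write the result back to the same bucket as CSV -- this is the step that fails
df.write
  .format("csv")
  .save("s3n://MY_BUCKET/output/")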
However, when I try to do this, it throws the following error
java.lang.IllegalArgumentException: AWS Access Key ID and Secret Access Key must be specified as the username or password (respectively) of a s3n URL, or by setting the fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively)
Okay. So I don't know why this happens, but I tried to hack around it by setting the recommended Hadoop configuration parameters. I then used DefaultAWSCredentialsProviderChain to load the AWSAccessKeyID and AWSSecretKey and set them via
spark.sparkContext.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", <CREDENTIALS_ACCESS_KEY>)
spark.sparkContext.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", <CREDENTIALS_SECRET_ACCESS_KEY>)
When I run it again it throws the following error:
java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: The AWS Access Key Id you provided does not exist in our records. (Service: Amazon S3; Status Code: 403; Error Code: InvalidAccessKeyId;
Okay. So that didn't work. I then removed the Hadoop configuration settings and hardcoded an IAM user's credentials in the S3 URL via s3n://ACCESS_KEY:SECRET_KEY#BUCKET/KEY
When I ran this it spit out the following error:
java.lang.IllegalArgumentException: Bucket name should be between 3 and 63 characters long
So it tried to create a bucket... which is definitely not what we want it to do.
I am really stuck on this one and would really appreciate any help here! It works fine when I run it locally, but completely fails on EMR.
The problem was the following:
The EC2 instances generated temporary credentials during the EMR bootstrap phase.
When I queried Redshift, I passed the aws_iam_role to the Databricks driver. The driver then re-generated temporary credentials for that same IAM role, which invalidated the credentials the EC2 instance had generated.
I then tried to upload to S3 using the old credentials (the ones that were stored in the instance's metadata).
It failed because it was trying to use out-of-date credentials.
The solution was to remove redshift authorization via aws_iam_role and replace it with the following:
// EC2MetadataUtils comes from the AWS Java SDK (com.amazonaws.util.EC2MetadataUtils).
// getIAMSecurityCredentials returns a map keyed by role name; IAM_ROLE here is the
// name of the instance profile's role.
val credentials = EC2MetadataUtils.getIAMSecurityCredentials
...
.option("temporary_aws_access_key_id", credentials.get(IAM_ROLE).accessKeyId)
.option("temporary_aws_secret_access_key", credentials.get(IAM_ROLE).secretAccessKey)
.option("temporary_aws_session_token", credentials.get(IAM_ROLE).token)
On Amazon EMR, try using the prefix s3:// to refer to an object in S3.
It's a long story.

WSO2 API MANAGER clustering Worker-Manager

This is regarding WSO2 API Manager worker/manager cluster configuration with an external Postgres DB. I have used two databases, i.e. wso2_carbon for the registry and user management and wso2_am for storing APIs. The respective XMLs have been configured, and the Postgres scripts have been run to create the database tables. When wso2server.sh is run, my log console shows that clustering is enabled and lists the members of the domain. However, on https://: when I try to create APIs, it throws an error in the design phase itself.
ERROR - add:jag org.wso2.carbon.apimgt.api.APIManagementException: Error while checking whether context exists
[2016-12-13 04:32:37,737] ERROR - ApiMgtDAO Error while locating API: admin-hello-v.1.2.3 from the database
java.sql.SQLException: org.postgres.Driver cannot be found by jdbc-pool_7.0.34.wso2v2
As per the error message, the driver class name you have given is org.postgres.Driver, which is not correct. It should be org.postgresql.Driver. Double-check the master-datasources.xml config.
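For reference, the relevant datasource entry in repository/conf/datasources/master-datasources.xml would look roughly like this (a hedged sketch; the database name, host, and credentials are placeholders, and the PostgreSQL JDBC jar must be present in repository/components/lib):
<datasource>
    <name>WSO2AM_DB</name>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:postgresql://HOST:5432/wso2_am</url>
            <username>USERNAME</username>
            <password>PASSWORD</password>
            <driverClassName>org.postgresql.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>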