I want to delete an AWS VPC, but I don't know how it came into existence. When I try to delete it in the AWS Console, it says:
We could not delete the following VPC (vpc-0a72ac71) Network interface
'eni-ce2a0d10' is currently in use. (Service: AmazonEC2; Status Code:
400; Error Code: InvalidParameterValue; Request ID:
821d8a6d-3d9b-4c24-b372-314ea9b18b23)
As the error message mentions "AmazonEC2", I suspected there might be some EC2 instances residing in this VPC. So I went into the EC2 dashboard, but found no EC2 instances there. However, I found two security groups associated with this VPC, so I decided to delete them, hoping they were the cause of the error. But when I tried to do so, I got this message:
As the message says, these security groups are associated with some network interfaces. Therefore, I decided to detach those interfaces, but I got this error message:
Error deleting network interfaces eni-ce2a0d10: You do not have
permission to access the specified resource. eni-0b7ff712: You do not
have permission to access the specified resource.
But I'm the root user, so I assume I should be able to do whatever I want, unless the resource was created by AWS itself or by another account.
I know this network interface is being used somewhere, but it would be very time-consuming to go through each AWS service and check.
I've already checked the AWS RDS service: no instance or RDS subnet has been created.
I've already checked this question and this one, with no luck.
I found the root cause of this issue.
Short Answer:
That VPC was created solely for the WorkDocs service instance, so AWS was preventing me from deleting the VPC and any of its dependent services and pieces.
How I figured it out:
First, I noticed something interesting written in the 'Description' column of the 'undeletable' network interfaces (you can see them in the OP's last figure):
"AWS created network interface for directory d-90672d6b72."
From "directory", I suspected that this might have something do to with AWS Directory Service. So I went to this service and noticed there is a directory associated with the VPC:
So I tried to remove this directory but I got this error message:
Error - Directory cannot be deleted This directory still has
authorized applications, and cannot be deleted. To delete this
directory, complete all of the following steps: • Delete the WorkDocs
site attached to this directory.
Therefore, I went to AWS WorkDocs Service and found it and deleted it:
So now that the directory was also deleted (circled in red), I went back to delete those network interfaces. However, I realized they had vanished! (I guess Amazon removed them on its own.) I went to the VPC service to see whether I was now able to delete the VPC. Guess what? That VPC had vanished too!
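(For reference: once the WorkDocs site is gone, the directory can also be removed from the CLI rather than the console; a one-line sketch using my directory ID:)

aws ds delete-directory --directory-id d-90672d6b72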
Now I understand what was happening: that VPC was created solely for the WorkDocs service instance. I wish Amazon were more transparent about it.
As a more generic answer to the "Error deleting network interface" issue: it happens when a network interface was created automatically for a higher-level AWS resource.
The generic solution is to manage the network interface through that higher-level resource directly, such as WorkDocs or EFS.
In my case, it happened when I wanted to delete a security group assigned to network interfaces created by an EFS volume.
So I went into the EFS console and removed the security group from the EFS.
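The same can be scripted. A sketch for the EFS case, where fs-12345678, fsmt-12345678 and sg-aaaaaaaa are placeholder IDs:

# Find the mount targets (each one owns a network interface) behind the file system
aws efs describe-mount-targets --file-system-id fs-12345678

# Replace the security groups attached to a mount target
aws efs modify-mount-target-security-groups \
    --mount-target-id fsmt-12345678 \
    --security-groups sg-aaaaaaaa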
(I'm afraid I'm probably about to reveal myself as completely unfit for the task at hand!)
I'm trying to set up a Redshift cluster and database to help manage data for a class/group project.
I have a dc2.large cluster running with either default options or what looked like the most generic choices in the couple of places where I was forced to make entries.
I have downloaded Aginity (Win64), as it is described as being specialized for Redshift. That said, I can't find any instructions for connecting with it. The connection dialog requests the following:
Server: using the endpoint for my cluster (less :57xx at the end).
UserID: the Master username for the database defined for the cluster.
Password: to match the UserID
SSL Mode (Disable, Allow, Prefer, Require): trying various options
Database: as named in cluster setup
Port: as defined in cluster setup
I can't get it to connect ("failed to establish connection"), and I don't know if I'm entering something wrong in Aginity or if I haven't set up my cluster properly.
Message: Failed to establish a connection to 'abc1234-smtm.crone7m2jcwv.us-east-1.redshift.amazonaws.com'.
Type : Npgsql.NpgsqlException
Source : Npgsql
Trace : at Npgsql.NpgsqlClosedState.Open(NpgsqlConnector context, Int32 timeout)
at Npgsql.NpgsqlConnector.Open()
at Npgsql.NpgsqlConnection.Open()
at Aginity.MPP.Common.BaseDataProvider.get_Connection()
at Aginity.MPP.Common.BaseDataProvider.CreateCommand(String commandText, CommandType commandType, IDataParameter[] commandParams)
at Aginity.MPP.Common.BaseDataProvider.ExecuteReader(String commandText, CommandType commandType, IDataParameter[] commandParams)
--- Inner Exception: ---
......
It seems there is not enough information going into Aginity to authorize a connection to my cluster: no account credentials are supplied. For UserID, am I meant to enter the ID of a valid user? Can I use the root account? What would the ID look like? I have set up a user with FullAccess to S3 and Redshift, then entered the UserID in this format:
arn:aws:iam::600123456789:user/john
along with the matching password, but that hasn't worked either.
The only training/tutorial I have been able to find and follow on this is the intro AWS directs you to, at https://qwiklabs.com/focuses/2366, which uses a web-based client (pgweb) that I can't find outside of the tutorial.
Any advice on what I am doing wrong, and how to do it right?
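(In case it helps with diagnosis: I assume the same parameters should also work from psql, something like the sketch below, where masteruser and dev stand in for my cluster's actual values and 5439 is the default Redshift port.)

psql -h abc1234-smtm.crone7m2jcwv.us-east-1.redshift.amazonaws.com -p 5439 -U masteruser -d dev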
Well, I think I got it working. I haven't had a chance to see if I can actually create a table yet, but it seems to be connected. I had to allow inbound traffic from outside the VPC, as per the snapshot above.
I'm guessing there's a better way than opening it up to all IP addresses, but I don't know my fellow team members' IPs, and aren't they all subject to change depending on the device they're using to connect?
How does one go about getting inside the VPC to connect that way, presumably more securely?
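(For anyone else landing here: one narrower option than 0.0.0.0/0 is to authorize each known IP individually on the cluster's security group. A sketch with placeholder values, where sg-0123456789abcdef0 is the security group attached to the cluster and 203.0.113.25 is one teammate's public IP:)

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5439 \
    --cidr 203.0.113.25/32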
I have Fabric Composer 0.7.2 installed on my Mac, and I was able to follow this thread to get it connected to my blockchain (v0.6.1 of Fabric) on Bluemix.
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an Ubuntu (16.04) Docker container and run composer-rest-server there. When I try to connect to my blockchain service from the Docker container (using the same ID, WebAppAdmin, that I used on my Mac), I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
    at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17
  code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my mac to my docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error that says "signature does not verify". I did some additional testing, and I discovered that if I used an ID that I had not previously used with composer (i.e. user_type1_0), then I could connect, and I could see a new cert in my .composer-credentials directory.
I tried deleting that container and building a new one (I had messed something else up), but I could not use that same user ID again.
Does anybody know how security and these certs are supposed to work? It seems as though something in certificate generation/validation is tied to the client (i.e. the hardware address), such that if I try to reuse an ID on a different machine, the certs or keys don't match. I have a way to make things work, but it doesn't seem like the right way if I can't use the same ID from different machines.
Thanks!
Hi, I tried to recreate this by having blockchain running on a Unix machine; I then copied my connection profile and certificate to my Mac and edited the connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using composer v0.7.4, so you could try that?
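Roughly, the steps I mean look like this (a sketch, assuming composer's default locations for profiles and credentials; docker-host is a placeholder):

# Copy the connection profile and the enrolment credentials to the other machine
scp -r ~/.composer-connection-profiles ubuntu@docker-host:~/
scp -r ~/.composer-credentials ubuntu@docker-host:~/

# Then, on the other machine, edit the copied profile's connection.json so the
# peer and membership service addresses point at the Bluemix service, and
# keyValStore points at the copied ~/.composer-credentials directory.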
I have also faced this issue, and concluded that there is inconsistent behavior when deploying a network using composer in a cloud environment, including Bluemix. The problem is not with composer, but with Fabric 0.6.
I am assuming that this issue is also indirectly related to the following known bugs in Fabric 0.6, which will not be fixed in Fabric 0.6.
ERROR:
throw er; // Unhandled 'error' event
      ^
Error
    at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
    at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
    at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
So far, we have understood that the following three JIRA issues are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity, and the Fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
https://jira.hyperledger.org/browse/FAB-2787
Conclusion:
There is no way around this issue on Bluemix or any other cloud environment with Fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as the defects mentioned above are not all fixed yet.
I'm trying to create a service connector to my S3 instance like this:
cf service-connector 13001 mybucketname.ds31s3.swisscom.com:443
But I get the following error:
Server-Error 403: Check of security groups failed (no access)
I have created my service key according to this documentation.
Connecting to my MongoDB works perfectly using a service connector.
You can access Swisscom's S3 directly, without the service connector.
The error message suggests that your current org and space do not have access to the S3 service. This is usually the case if there is no app binding for that service in the current space. Please check whether you created your service key in the right org and space.
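A quick way to check, as a sketch (MY-ORG, MY-SPACE and the service/key names are placeholders):

cf target                             # shows the org and space you are currently targeting
cf target -o MY-ORG -s MY-SPACE       # switch to where the service instance lives
cf service-keys my-s3-service         # list the keys for that instance
cf service-key my-s3-service my-key   # show one key's credentials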
There was a misconfiguration due to security changes. We have fixed the issue, so connecting to S3 with the service connector should now work.
I'm performing REST API operation Start Role (http://msdn.microsoft.com/en-us/library/jj157189.aspx)
In the URL https://management.core.windows.net/{subscription-id}/services/hostedservices/{service-name}/deployments/{deployment-name}/roles/{role-name}/Operations, we have replaced {service-name}, {deployment-name} and {role-name} with the name of the VM.
As a result, we get the following message:
"ResourceNotFound: The resource service name hostedservices is not supported."
The List Hosted Services operation (http://msdn.microsoft.com/en-us/library/windowsazure/ee460781.aspx) shows that we have 2 VMs as hosted services.
The Get Role operation (http://msdn.microsoft.com/en-us/library/jj157193.aspx) also gives info about each of the VMs.
Thanks in advance.
You are using:
{subscription-id}/services/hostedservices/{service-name}/deployments/{deployment-name}/roles/{role-name}/Operations
But the correct URI is:
{subscriptionID}/services/hostedservices/{serviceName}/deployments/{deploymentName}/roleInstances/{roleInstanceName}/Operations
See the difference?
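Put together with curl, the corrected call might look like the sketch below (placeholder names in braces; the .pem must be a management certificate uploaded to the subscription, and I believe the operation needs x-ms-version 2012-03-01 or later):

curl -X POST \
    -E ./management-cert.pem \
    -H "x-ms-version: 2012-03-01" \
    -H "Content-Type: application/xml" \
    -d '<StartRoleOperation xmlns="http://schemas.microsoft.com/windowsazure"><OperationType>StartRoleOperation</OperationType></StartRoleOperation>' \
    "https://management.core.windows.net/{subscriptionID}/services/hostedservices/{serviceName}/deployments/{deploymentName}/roleInstances/{roleInstanceName}/Operations"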
I haven't worked with this particular operation; however, a few things:
service-name: It should be the name of the hosted service (the one with .cloudapp.net) and what you see when you list your hosted services.
deployment-name: Generally speaking, it's a GUID returned by the Get Deployment operation (http://msdn.microsoft.com/en-us/library/windowsazure/ee460804.aspx).
role-name: The role name is also returned when you do a Get Deployment operation; you should use that. I'm not sure if it is the same as the name of your VM.
Can you retry your operation after changing these values?
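For reference, both names can be read from a Get Deployment call against the production slot; a sketch with the same certificate-based auth and placeholders in braces (the response XML carries the deployment's Name and the role instance names):

curl -E ./management-cert.pem \
    -H "x-ms-version: 2012-03-01" \
    "https://management.core.windows.net/{subscriptionID}/services/hostedservices/{serviceName}/deploymentslots/Production"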
In my case, the deployment name is the name of the first VM I created in this cloud service. So if I add 3 machines to the same cloud service, all of them have the same deployment name: the name of the first machine.
I have followed the instructions at https://github.com/cloudfoundry/oss-docs/tree/master/vcap/adding_a_system_service and copied the echo service to create my new service. (That document is somewhat out of date, in that "excluded components" no longer exists.)
In any case, my service shows up as running, with a gateway and a node, when I look at 'vcap status' on the server. However, when I look at 'vmc services' from the client, my service is not in the list. Where is this list maintained, and why is my service not on it?
Various services, including blob, filesystem, mongodb, etc., are shown in the 'vmc services' list even though they have never been included in my config. Where is this maintained, and why are these other services on the list?
The cloud_controller.log file shows a "Create service request:" for echo every minute. This service is not in my config file (it was once, but it was removed and I repeated the deployment). What is prompting this request for a service that is not defined in the config?
The _gateway.log for my service shows the following:
INFO -- Sending info to cloud controller: ...api.vcap.me/services/v1/offerings
INFO -- Fetching handles from cloud controller .../offerings/.../handles
ERROR -- Failed registering with cloud controller, status=400
DEBUG -- [GaaS-Provisioner] Connected to node mbus..
ERROR -- Failed fetching handles, status=404
Why does my gateway fail to register with the cloud controller? I have found some reports suggesting that the problem is with domain name mapping, and I have verified that the server can find itself:
$curl api.vcap.me
Welcome to VMware's Cloud Application Platform
What can I do to register my service?
You can also try asking your question on the vcap-dev Google group:
https://groups.google.com/a/cloudfoundry.org/forum/?fromgroups#!forum/vcap-dev
They are focused on answering and discussing OSS subjects for Cloud Foundry!
If you follow the document correctly, things should work just fine. I understand that the mechanism for maintaining the excluded list of components has changed, and that it can be a point of confusion when following the steps mentioned in the article (just ignore that step entirely).
ERROR -- Failed registering with cloud controller, status=400
Well, this is a point of worry. I recently followed the article step by step and was able to add a new service.
Is the echo service showing up in vmc services?
Have you copied the yml files for the node and gateway into ./cloudfoundry/.deployments/devbox/config?
Are the tokens for your gateway unique, and do they match across the two files ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml and ./cloudfoundry/.deployments/devbox/config/*_gateway.yml? (See the sketch below.)
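A quick way to eyeball both tokens side by side (a sketch, assuming the devbox layout from the article and an echo-style gateway file name):

grep -i token ./cloudfoundry/.deployments/devbox/config/cloud_controller.yml
grep -i token ./cloudfoundry/.deployments/devbox/config/echo_gateway.yml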
I would recommend that you first concentrate on getting the echo service listed in the vmc services output. Once that works, replicate the steps (taking absolute care to modify things like the token) to get your custom service working.
Cheers,
Ankit
You should follow this guide.
It worked for me.
Regards.