I create a subnet before creating an instance via a Python script:
def subnet_create(self, network_id, **kwargs):
    params = {'name': kwargs.get("name"),
              'cidr': kwargs.get("cidr"),
              'ip_version': 4,
              'enable_dhcp': True}
    body = {'subnet': {'network_id': network_id}}
    body['subnet'].update(params)
    return self.neutron_client.create_subnet(body=body).get("subnet")
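For reference, the request body this helper ends up sending can be reproduced standalone; a minimal sketch with made-up values (note that no gateway_ip is passed, so Neutron is left to pick the default gateway for the CIDR):

```python
# Build the same request body subnet_create() posts to Neutron,
# using made-up example values.
params = {'name': 'demo-subnet',   # hypothetical subnet name
          'cidr': '10.0.0.0/24',   # hypothetical CIDR
          'ip_version': 4,
          'enable_dhcp': True}
body = {'subnet': {'network_id': 'net-1234'}}  # hypothetical network ID
body['subnet'].update(params)

print(body['subnet']['cidr'])          # the CIDR Neutron allocates from
print('gateway_ip' in body['subnet'])  # False: the gateway is left to Neutron's default
```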
After that, I create an instance with this code:
compute_srv = self.nova_cli.instance_create(
    compute_inst["name"],
    compute_inst["image"],
    compute_inst["flavor"],
    key_name=compute_inst["key_name"],
    user_data=compute_inst["user_data"],
    security_groups=compute_inst["security_groups"],
    nics=compute_inst["nics"])
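For what it's worth, the nics argument that novaclient's server-create call expects is a list of dicts keyed by net-id (or port-id); a minimal sketch with a made-up network UUID:

```python
# The nics argument Nova expects: a list of dicts, one per NIC,
# each referencing a network (or a pre-created port) by UUID.
network_id = 'aaaa-bbbb-cccc'  # hypothetical: ID of the network the subnet was created on
nics = [{'net-id': network_id}]
# With a pre-created Neutron port instead:
# nics = [{'port-id': 'dddd-eeee-ffff'}]
print(nics)
```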
Horizon shows that the subnet and the instance were created successfully, but when I open the console page, the terminal sometimes shows that the instance cannot reach the metadata server:
calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed
The instance cannot ping its subnet gateway, but instances on other subnets can ping this instance.
The problem does not occur every time: sometimes the instance does reach the metadata server, and the hostname and IP address are set correctly.
But when I create the subnet manually through Horizon, the instances are always able to reach the metadata server.
I am so confused, and I have been stuck here for about a week. Does anyone know why? Thank you very much.
I am using googleapiclient.discovery as the client to connect to GCP. Ideally, I would like to retrieve a virtual machine by its
zone
project
name
I am having a difficult time finding code samples that do this. I am initializing the client like so:
client = googleapiclient.discovery.build('compute', 'v1')
I've exported the GOOGLE_APPLICATION_CREDENTIALS environment variable and I am able to successfully connect to GCP. However, I am unable to fetch an instance by its name. I am looking for a method like:
instance = client.compute.instances().get("project","zone","instance_name")
Any help with this would be greatly appreciated.
You just need to set up a client with discovery and call instances().get(), like so:
compute = discovery.build('compute', 'v1', credentials=credential)
getinstance = compute.instances().get(project=project_id, zone=region, instance=instance_id).execute()
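As a sketch, the same call can be wrapped in a small helper. The stub classes below stand in for the real discovery client so the shape of the call is clear without credentials; the project, zone, and instance names are made up:

```python
def get_instance(compute, project_id, zone, instance_name):
    """Fetch one VM by project, zone, and name via the Compute API."""
    return compute.instances().get(
        project=project_id, zone=zone, instance=instance_name).execute()

# Stubs standing in for googleapiclient.discovery.build('compute', 'v1'),
# so the call shape can be demonstrated offline.
class _StubRequest:
    def __init__(self, **kwargs):
        self.kwargs = kwargs
    def execute(self):
        # A real request returns the instance resource as a dict.
        return {'name': self.kwargs['instance'], 'zone': self.kwargs['zone']}

class _StubInstances:
    def get(self, **kwargs):
        return _StubRequest(**kwargs)

class _StubCompute:
    def instances(self):
        return _StubInstances()

vm = get_instance(_StubCompute(), 'my-project', 'us-central1-a', 'my-vm')
print(vm['name'])  # prints "my-vm"
```

With real credentials you would pass the object returned by discovery.build in place of the stub.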
(I'm afraid I'm probably about to reveal myself as completely unfit for the task at hand!)
I'm trying to set up a Redshift cluster and database to help manage data for a class/group project.
I have a dc2.large cluster running with either default options or what looked like the most generic choices in the couple of places where I was forced to make entries.
I have downloaded Aginity (Win64), as it is described as being specialized for Redshift. That said, I can't find any instructions for connecting with it. The connection dialog requests the following:
Server: the endpoint for my cluster (without the :57xx at the end).
UserID: the Master username for the database defined for the cluster.
Password: to match the UserID
SSL Mode (Disable, Allow, Prefer, Require): trying various options
Database: as named in cluster setup
Port: as defined in cluster setup
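The fields above map onto the endpoint string shown in the Redshift console; a sketch (with a made-up endpoint) of how it splits into the Server, Port, and Database entries:

```python
# Split a Redshift console endpoint (made-up example) into the
# Server / Port / Database fields a SQL client asks for.
endpoint = 'examplecluster.abc123xyz789.us-east-1.redshift.amazonaws.com:5439/dev'
hostport, database = endpoint.rsplit('/', 1)
server, port = hostport.rsplit(':', 1)

print(server)    # the value for the Server field
print(port)      # 5439 is Redshift's default port
print(database)  # the database named at cluster setup
```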
I can't get it to connect ("failed to establish connection") and don't know whether I'm entering something wrong in Aginity or haven't set up my cluster properly.
Message: Failed to establish a connection to 'abc1234-smtm.crone7m2jcwv.us-east-1.redshift.amazonaws.com'.
Type : Npgsql.NpgsqlException
Source : Npgsql
Trace : at Npgsql.NpgsqlClosedState.Open(NpgsqlConnector context, Int32 timeout)
at Npgsql.NpgsqlConnector.Open()
at Npgsql.NpgsqlConnection.Open()
at Aginity.MPP.Common.BaseDataProvider.get_Connection()
at Aginity.MPP.Common.BaseDataProvider.CreateCommand(String commandText, CommandType commandType, IDataParameter[] commandParams)
at Aginity.MPP.Common.BaseDataProvider.ExecuteReader(String commandText, CommandType commandType, IDataParameter[] commandParams)
--- Inner Exception: ---
......
It seems there is not enough information going into Aginity to authorize the connection to my cluster: no account credentials are supplied. For UserID, am I meant to enter the ID of a valid user? Can I use the root account? What would the ID look like? I have set up a user with FullAccess to S3 and Redshift, then entered the UserID in this format:
arn:aws:iam::600123456789:user/john
along with the matching password, but that hasn't worked either.
The only training/tutorial I have been able to find on this is the intro that AWS directs you to, at https://qwiklabs.com/focuses/2366, which uses a web-based client (pgweb) that I can't find outside of the tutorial.
Any advice what I am doing wrong, and how to do it right?
Well, I think I got it working. I haven't had a chance to see whether I can actually create a table yet, but it seems to be connected. I had to allow inbound traffic from outside the VPC, as per the snapshot above.
I'm guessing there's a better way than opening it up to all IP addresses, but I don't know my fellow team members' IPs, and aren't they all subject to change depending on the device they're using to connect?
How does one go about getting inside the VPC to connect that way, presumably more securely?
I created a new application in a new (empty) space and when I try to deploy it, it fails with this error:
Server error, status code: 400, error code: 210003, message: The host is taken: IoT-DEMO-POC
You need to use a different route because the host is taken. See http://iot-demo-poc.mybluemix.net/
You can get a random route if you run the push command with --random-route
cf push --random-route
You can also open the manifest.yml file and change the host parameter, if there is one, or the application name. The route needs to be unique.
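For example, a manifest.yml with an explicit, unique host might look like this (the name and host values are placeholders):

```yaml
applications:
- name: IoT-DEMO-POC
  host: iot-demo-poc-myteam   # must be unique across mybluemix.net
```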
I am trying to expose a WCF-based RESTful service over HTTP and am so far unsuccessful. I'm trying on my local machine first to prove it works. I found a suggestion here that I remove my local cluster and then manually run this PowerShell command from the SF SDK folder as administrator to recreate it with the machine-name binding: .\DevClusterSetup.ps1 -UseMachineName
It created the cluster successfully. I can use the SF Explorer and see in the cluster manifest that entries in the NodeList show the machine name rather than localhost. This seems good.
But the first problem I notice is that if I expand my way through SF Explorer down to the node my app is running on, I see an endpoints entry, but the URL is not what I'd expect. I am seeing this: http://surfacelap/d5be9425-3247-4290-b77f-1a90f728fb8d/39cda0f7-cef4-4c7f-8af2-d05786a834b0-131111019607641260
Is that what I should see, even though I have an endpoint set up? I did not expect the GUID and other numbers in the path. This makes me suspect that SF does not see my service as publicly accessible and is perhaps only set up for internal access within the app. If I dig down into my service manifest, I see this as expected:
<Resources>
  <Endpoints>
    <Endpoint Name="ResolverEndpoint" Protocol="http" Type="Input" Port="80" />
  </Endpoints>
</Resources>
But how do I know whether the service itself is mapped to it? When I use the crazy long URL above and call a simple method of my service, I get an HTTP 202 response and no response data, as expected. If I then change the method name to one that doesn't exist, I get the same thing, not the expected HTTP 404. I've tried both my machine name and localhost, with the same result.
So clearly I'm doing something wrong. Below is my CreateServiceInstanceListeners override. In it you can see I use "ResolverEndpoint" as my endpoint resource name, which matches the service manifest:
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener((context) =>
        new WcfCommunicationListener<IResolverV2>(
            serviceContext: context,
            wcfServiceObject: new ResolverServiceV2(),
            listenerBinding: new WebHttpBinding(WebHttpSecurityMode.None),
            endpointResourceName: "ResolverEndpoint"
        )
    )};
}
What am I doing wrong here?
Here's a way to get it to work: https://github.com/loekd/ServiceFabric.WcfCalc
The essential changes to your code are using the public name of your cluster as the endpoint URL and adding a WebHttpBehavior to that endpoint.
The endpoint resource specified in the service manifest is shared by all of the replica listeners in the process that use the same endpoint resource name. So if your service has more than one partition, it is possible that replicas from different partitions end up in the same process. In order to differentiate the messages addressed to different partitions, the listener adds the partition ID and an additional instance GUID to the path.
If you are going to have a singleton-partition service and know that there will not be more than one replica in the same process, you can directly supply the EndpointAddress you want the listener to open at. Use the CodePackageActivationContext API to get the port from the endpoint resource name, the node name or IP address from the NodeContext, and then provide the path you want the listener to open at.
Here is the code in the WcfCommunicationListener that constructs the Listen Address.
private static Uri GetListenAddress(
    ServiceContext serviceContext,
    string scheme,
    int port)
{
    return new Uri(
        string.Format(
            CultureInfo.InvariantCulture,
            "{0}://{1}:{2}/{5}/{3}-{4}",
            scheme,
            serviceContext.NodeContext.IPAddressOrFQDN,
            port,
            serviceContext.PartitionId,
            serviceContext.ReplicaOrInstanceId,
            Guid.NewGuid()));
}
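To make the resulting address concrete, here is the same format string evaluated with made-up values (Python is used purely for illustration; the partition ID, replica ID, and GUID are hypothetical, taken from the shape of the URL in the question):

```python
# Reproduce the WcfCommunicationListener listen-address format with
# made-up values, to show where the GUIDs in the observed URL come from.
scheme = 'http'
host = 'surfacelap'          # NodeContext.IPAddressOrFQDN
port = 80                    # from the endpoint resource
partition_id = 'd5be9425-3247-4290-b77f-1a90f728fb8d'  # hypothetical PartitionId
replica_id = '131111019607641260'                      # hypothetical ReplicaOrInstanceId
guid = '39cda0f7-cef4-4c7f-8af2-d05786a834b0'          # hypothetical per-listener Guid.NewGuid()

url = '{0}://{1}:{2}/{5}/{3}-{4}'.format(
    scheme, host, port, partition_id, replica_id, guid)
print(url)
```

So the path segments are the per-listener GUID followed by "partitionId-replicaId", which matches the URL seen in SF Explorer.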
Please note that you can then have only one application, one service, and one partition on a node, and when you are testing locally, keep the instance count of that service at 1. When deployed to an actual cluster you can use an instance count of -1.
We are seeing ProtocolExceptions while communicating with a service running in the cluster. The message and InnerException message:
System.ServiceModel.ProtocolException: You have tried to create a channel to a service that does not support .Net Framing.
---> System.IO.InvalidDataException: Expected record type 'PreambleAck', found '145'.
This service is running on a local dev cluster, and the exception is thrown after communicating successfully with the service.
The code that we use for communicating is:
var eventHandlerServiceClient = ServiceProxy.Create<IEventHandlerService>(eventHandlerTypeName, new Uri(ServiceFabricSettings.EventHandlerServiceName));
return await eventHandlerServiceClient.GetQueueLength();
We have retry logic (with increasing delays between attempts), but this call never succeeds. So it looks like the service is in a faulted state and cannot recover from it.
Update
We are also seeing the following errors in the logs:
connection 0x1B6F9EB0 localhost:64002-[::1]:50376 target 0x1B64F3C0: invalid frame: length=0x1000100,type=514,header=28278,check=0x742E7465
Update 14-12-2015
If this ProtocolException is thrown, retries don't help. Even after hours of waiting, it still fails.
We log the endpoint address with
var spr = ServicePartitionResolver.GetDefault();
var x = await spr.ResolveAsync(new Uri(ServiceFabricSettings.EventHandlerServiceName),
    eventHandlerTypeName,
    new CancellationToken());
var endpointAddress = x.GetEndpoint().Address;
The resolved endpoint looks like
{"Endpoints":{"":"net.tcp:\/\/localhost:57999\/d6782e21-87c0-40d1-a505-ec6f64d586db\/a00e6931-aee6-4c6d-868a-f8003864a216-130945476153695343"}}
This endpoint is the same as reported by the Service Fabric Explorer.
From our logs, it seems that this service is working (it is reachable via another API method), but this specific call never succeeds.
This typically indicates a mismatched communication stack on the service and client side. Once the service is up and running, check the endpoint of the service replica via Service Fabric Explorer. If that seems fine, check that the client is connecting to the right service. Resolve the partition using the ServicePartitionResolver (https://msdn.microsoft.com/en-us/library/azure/microsoft.servicefabric.services.servicepartitionresolver.aspx), passing the same arguments that you pass to ServiceProxy.
I'm seeing the same sort of errors. Looking at my code, I'm caching an ActorProxy. I'm going to change that and remove the caching, in case the cache is referencing an old instance of the service.
That appears to have fixed my issue. I'm guessing that the proxy caches the reference once it has been used, and if the service changes, that reference becomes stale.