Using JBoss 7's jboss-cli I can query the deployed applications:
[standalone@localhost:9999 /] deployment-info --headers=
NAME                 RUNTIME-NAME         PERSISTENT ENABLED STATUS
jboss-ejb-in-ear.ear jboss-ejb-in-ear.ear true       true    OK
singleton_in_war.war singleton_in_war.war true       true    OK
Programmatically, I can run any CLI query starting with /, for example this:
/path=jboss.server.log.dir:read-attribute(name=path)
where the address is
/path=jboss.server.log.dir
and the operation is
read-attribute(name=path)
My question is, for the CLI query
deployment-info --headers=
what is the address and what is the operation?
I've found this solution useful for querying the deployed applications in standalone mode using the CLI API.
The CLI query is:
/deployment=*:read-attribute(name=name)
where the address /deployment=* targets all the deployments, and the operation requests the name attribute for each deployment on the current server.
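For reference, running that query directly in jboss-cli returns one entry per matching deployment, roughly like this (illustrative output for the deployments from the question):
{
    "outcome" => "success",
    "result" => [
        {
            "address" => [("deployment" => "jboss-ejb-in-ear.ear")],
            "outcome" => "success",
            "result" => "jboss-ejb-in-ear.ear"
        },
        {
            "address" => [("deployment" => "singleton_in_war.war")],
            "outcome" => "success",
            "result" => "singleton_in_war.war"
        }
    ]
}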
Finally, this snippet shows how to execute the query using the model controller API:
import java.net.InetAddress;
import java.util.List;

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

...

// Create the controller client against the native management interface.
ModelControllerClient client =
        ModelControllerClient.Factory.create(InetAddress.getByName("localhost"), 9999);

// Build the operation: /deployment=*:read-attribute(name=name)
ModelNode operation = new ModelNode();
operation.get("address").add("deployment", "*");
operation.get("operation").set("read-attribute");
operation.get("name").set("name");

// Execute it and read the list of per-deployment results.
ModelNode result = client.execute(operation);
List<ModelNode> deployments = result.get("result").asList();

// Finally we can iterate and get the deployment names.
for (ModelNode deployment : deployments) {
    String deploymentName = deployment.get("result").asString();
    System.out.println("deploymentName = " + deploymentName);
}

client.close();
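Against the deployments from the question, this prints:
deploymentName = jboss-ejb-in-ear.ear
deploymentName = singleton_in_war.war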
This works on both WildFly 10 and EAP 7.
Did you try this command?
/server-group=*/deployment=*/:read-resource(recursive=false,proxies=true,include-runtime=true,include-defaults=true)
You can navigate the returned model nodes and get the details you need.
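For example, to narrow it to a single server group (assuming the default group name main-server-group from the stock domain configuration):
/server-group=main-server-group/deployment=*:read-resource(include-runtime=true)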
The deployment-info command only has the options --name and --headers. Using the command deployment-info --name=singleton_in_war.war you can narrow the information to this deployment only.
The --help option shows you the online help for deployment-info:
[standalone@localhost:9999 /] deployment-info --help

SYNOPSIS

    Standalone mode:

        deployment-info [--name=wildcard_expression]
                        [--headers={operation_header (;operation_header)*}]

    Domain mode:

        deployment-info --name=deployment_name |
                        --server-group=server_group [--name=wildcard_expression]
                        [--headers={operation_header (;operation_header)*}]

DESCRIPTION

    Displays information about single or multiple deployments.

    In the standalone mode the --name argument is optional.
    If it's absent, the command will display information about all the
    registered deployments. Otherwise, the value of the --name is either a
    specific deployment name or a wildcard expression.
...
Enter:
deployment-info --name=
and then press Tab; it will autocomplete all deployment names.
Related
I am trying to set up Docker containers as nodes with the following custom mapping:
hostname.selector=docker:IPAddress
node.name.selector=docker:Name
username.selector=root
osFamily.selector=Docker
ssh-authentication=password
ssh-password-storage-path=keys/${node.hostname}/${node.username}
node.ssh-authentication.selector=password
docker-shell.default=bash
I always get this error message:
Failed: AuthenticationFailure: Authentication failure connecting to node: "xxxxxx". Make sure your resource definitions and credentials are up to date.
Set the Docker node executor: go to Project Settings > Edit Configuration > Default Node Executor tab, select "docker-container-node-executor", and save.
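Equivalently, this can be set in the project's project.properties file; a minimal sketch, assuming the provider name matches the label shown in the UI:
service.NodeExecutor.default.provider=docker-container-node-executor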
I need to create many MongoDB Atlas endpoint connections using Terraform.
I successfully created the first one using this code:
#Private endpoint connection
resource "mongodbatlas_private_endpoint" "dbpe" {
  project_id    = var.prj_id
  provider_name = "AWS"
  region        = var.aws_region
}

#AWS endpoint for secure connect to mongo db
resource "aws_vpc_endpoint" "ec2" {
  vpc_id = var.sh_vpc
  #service_name = "com.amazonaws.${var.aws_region}.ec2"
  service_name      = mongodbatlas_private_endpoint.dbpe.endpoint_service_name
  vpc_endpoint_type = "Interface"

  security_group_ids = [
    aws_security_group.lb_sg.id,
  ]

  subnet_ids = [
    aws_subnet.subnet1.id,
    var.sh_subnet
  ]

  tags = {
    "Name" = local.tname
  }

  #private_dns_enabled = true
}
But when I try to use this code a second time in another folder (another tfstate), it fails with this error:
Error: error creating MongoDB Private Endpoints Connection: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/***/privateEndpoint: 409 (request "Conflict") A PrivateLink Endpoint Service already exists for AWS region US_EAST_2.
As I understand it, the second "mongodbatlas_private_endpoint" "dbpe" tries to create another endpoint service. But when I create the second endpoint manually through the web UI, it uses the same service as the first endpoint.
How can I tell the second endpoint to use the existing service?
Or is my approach wrong altogether?
Please, help!
Thank you!
I found the solution.
Creating the "Endpoint Connection" really creates Endpoint only when you do it at first time. All of next times is creating an only association between Atlas endpoint and new AWS Endpoint.
In terraform I tried to create an Atlas endpoint second time and catch an error (because of limit - 1 endpoint per region). All I need to do - is create "Basic Endpoint" one time (by separate folder with own tfstate) and don't delete it. And for each new AWS endpoint need to create a new link from AWS Endpoint to "Basic". I do it by a terraform resource:
mongodbatlas_private_endpoint_interface_link
Resource "mongodbatlas_private_endpoint" is not need now. A "service_name" parameter in "aws_vpc_endpoint" you can hardcoded from "Basic" Endpoint. Use "output" to see mongodbatlas_private_endpoint.test.private_link_id - this is what you need.
I am having a problem configuring Splunk to send logs from an ECS cluster.
From the Events tab of the service, this error was shown:
Problem Statement: unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxx is missing an attribute required by your task.
After doing a deep dive I found I have to update /etc/ecs/ecs.config and add the entry ECS_AVAILABLE_LOGGING_DRIVERS='["splunk","awslogs"]'.
But this didn't help; I'm still getting the same error.
Can anyone please help?
If you are looking to send container logs to Splunk, you need a logConfiguration section in the task definition JSON with all the required details, like below:
"logConfiguration": {
    "logDriver": "splunk",
    "options": {
        "splunk-token": "",
        "splunk-url": "",
        ...
    }
}
AWS task definition parameters
splunk logging options
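One more note on the "missing an attribute" placement error: after adding ECS_AVAILABLE_LOGGING_DRIVERS to /etc/ecs/ecs.config, the ECS agent has to be restarted before the container instance advertises the new attribute (command assumes an Amazon Linux ECS-optimized instance):
sudo systemctl restart ecs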
When I use the AWS EC2 driver to invoke the create_node and ex_modify_instance_attribute APIs, I get this error:
raise InvalidCredsError(err_list[-1])
libcloud.common.types.InvalidCredsError: 'AuthFailure: AWS was not able to validate the provided access credentials'
But the ex_create_subnet and list_nodes APIs succeed, and I'm sure I have the IAM permissions to create EC2 instances.
By the way, I am using the AWS cn-north-1 region.
I found that calling create_node with certain parameters triggers the AuthFailure.
The code:
node = self.conn.create_node(name=instance_name,
                             image=image,
                             size=size,
                             ex_keyname=ex_keyname,
                             ex_iamprofile=ex_iamprofile,
                             ex_subnet=ex_subnet,
                             ex_security_group_ids=ex_security_group_ids,
                             ex_mincount=ex_mincount,
                             ex_maxcount=ex_mincount,
                             ex_blockdevicemappings=config['block_devices'],
                             ex_assign_public_ip=config['eth0']['need_eip']
                             )
I just deleted some parameters and it works:
node = self.conn.create_node(name=instance_name,
                             image=image,
                             size=size,
                             ex_keyname=ex_keyname,
                             # ex_iamprofile=ex_iamprofile,
                             ex_subnet=ex_subnet,
                             # ex_security_group_ids=ex_security_group_ids,
                             ex_mincount=ex_mincount,
                             ex_maxcount=ex_mincount,
                             # ex_blockdevicemappings=config['block_devices'],
                             # ex_assign_public_ip=config['eth0']['need_eip']
                             )
I am very new to Cloud Foundry. I have set up Cloud Foundry on the Google Compute Engine platform following these guides: source1 and source2.
Terraform was used to create the needed infrastructure. All seemed fine: I didn't get any errors while deploying Cloud Foundry itself, and the bosh cck command reports that there are no problems. But when I tried to deploy my hello-world app, I got the following error in the terminal after cf push:
Creating container
Failed to create container
FAILED
Error restarting application: StagingError.
After checking the log files I found the following messages:
{
"timestamp":"1474637304.026303530",
"source":"garden-linux",
"message":"garden-linux.loop-mounter.mount-file.mounting",
"log_level":2,
"data":{
"destPath":"/var/vcap/data/garden/aufs_graph/aufs/diff/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
"error":"exit status 32",
"filePath":"/var/vcap/data/garden/aufs_graph/backing_stores/08829a3252c1d60729e3b5482b0fb109652c9ab5beff9724e4e4ae756a0bc3ce",
"output":"mount: wrong fs type, bad option, bad superblock on /dev/loop0,\n missing codepage or helper program, or other error\n In some cases useful info is found in syslog - try\n dmesg | tail or so\n\n",
"session":"2.276"
}
}
{
"timestamp":"1474637304.026949406",
"source":"garden-linux",
"message":"garden-linux.pool.acquire.provide-rootfs-failed",
"log_level":2,
"data":{
"error":"mounting file: mounting file: exit status 32",
"handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"session":"9.545"
}
}
{
"timestamp":"1474637304.027062416",
"source":"garden-linux",
"message":"garden-linux.garden-server.create.failed",
"log_level":2,
"data":{
"error":"mounting file: mounting file: exit status 32",
"request":{
"Handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"GraceTime":0,
"RootFSPath":"/var/vcap/packages/rootfs_cflinuxfs2/rootfs",
"BindMounts":[
{
"src_path":"/var/vcap/data/executor_cache/6942123d3462ad9d21a45729c3cae183-1474475979582384649-1.d",
"dst_path":"/tmp/lifecycle"
}
],
"Network":"",
"Privileged":true,
"Limits":{
"bandwidth_limits":{
},
"cpu_limits":{
"limit_in_shares":512
},
"disk_limits":{
"inode_hard":200000,
"byte_hard":6442450944,
"scope":1
},
"memory_limits":{
"limit_in_bytes":1073741824
}
}
},
"session":"11.44187"
}
}
{
"timestamp":"1474637304.034646988",
"source":"garden-linux",
"message":"garden-linux.garden-server.destroy.failed",
"log_level":2,
"data":{
"error":"unknown handle: ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"handle":"ec6e7469-0ef0-48a8-bcd0-82f4a2ea173f-5de2e641d9284aeea209ca447ffffb6d",
"session":"11.44188"
}
}
And meanwhile, dmesg | tail showed the following:
[161023.238082] aufs test_add:283:garden-linux[7681]: uid/gid/perm /var/vcap/data/garden/aufs_graph/aufs/diff/d350dcd30f6d6f8b37eabe06a3b73bcea0a87f9aff4edf15f12792269fc9f97c 4294967294/4294967294/0755, 0/0/0755
[161023.238109] aufs au_opts_verify:1597:garden-linux[7681]: dirperm1 breaks the protection by the permission bits on the lower branch
[161023.413392] device wtj3qdqhig0t-0 entered promiscuous mode
I'm not sure whether these issues are connected, or whether they are issues at all, but I post them here to be sure I didn't miss anything.
I don't know how to fix this problem, or where to look for a solution: in the Terraform scripts or in the BOSH manifest files. We have a microservice architecture with three Node.js services and one Ruby service, so deployment is a very important question for us.
Here is my application's manifest.yml file:
---
applications:
- name: hello_cloud
  memory: 128M
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  instances: 1
  random-route: true
  command: "node server.js"
My goal is to be able to deploy applications using Cloud Foundry. If you have any additional questions, or if I wrote something unclear, feel free to write me.
This issue is related to a conflict between Garden and the 4.4 Linux kernel. To use the example Cloud Foundry manifest, use the following stemcell:
bosh upload stemcell https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-trusty-go_agent?v=3262.19
bosh deploy
You may need to delete your cf deployment before re-deploying due to quota issues.