How to purge an Artemis queue from the command line? - activemq-artemis

Is there any way to purge Artemis queues? I have already tried purging them by deleting the files under data/paging in the directory where I installed my Artemis broker.
Artemis has a web UI called hawtio, and although I have deleted all the files in the paging directory, it still shows the messages in the UI, which it should not if the purge had worked.
Please suggest.

From the command line, in your broker instance's bin folder:
artemis queue delete --user user --password password --name queue-name
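Note that delete removes the queue itself rather than just emptying it. Depending on your Artemis version there may also be a purge subcommand that only removes the messages; you can list the queue subcommands your version supports with:
artemis help queue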

The Artemis broker provides a REST management API (exposed via Jolokia) that users can use to read and change many of the broker's parameters at runtime. Therefore, it's possible to purge a queue from the command line with a request like this:
curl -X POST -H "Content-Type: application/json" -d '{ "type": "EXEC", "mbean": "org.apache.activemq.artemis:address=\"test.performance.queue\",broker=\"0.0.0.0\",component=addresses,queue=\"test.performance.queue\",routing-type=\"anycast\",subcomponent=queues", "operation": "removeMessages(java.lang.String)", "arguments": [ "" ] }' http://localhost:8161/jolokia/exec | jq .
In the example above, I am purging the contents of a queue named test.performance.queue on a broker instance named 0.0.0.0. These parameters need to be adjusted for your specific case.
Note: I used jq . simply to pretty-print the response JSON (you can skip it if you don't care about the response):
{
  "request": {
    "mbean": "org.apache.activemq.artemis:address=\"test.performance.queue\",broker=\"0.0.0.0\",component=addresses,queue=\"test.performance.queue\",routing-type=\"anycast\",subcomponent=queues",
    "arguments": [
      ""
    ],
    "type": "exec",
    "operation": "removeMessages(java.lang.String)"
  },
  "value": 13001,
  "timestamp": 1503740691,
  "status": 200
}
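If you want to confirm the purge worked, a similar Jolokia request can read the queue's message count. This is a sketch assuming the same MBean name and the default Jolokia endpoint used above:
curl -X POST -H "Content-Type: application/json" -d '{ "type": "read", "mbean": "org.apache.activemq.artemis:address=\"test.performance.queue\",broker=\"0.0.0.0\",component=addresses,queue=\"test.performance.queue\",routing-type=\"anycast\",subcomponent=queues", "attribute": "MessageCount" }' http://localhost:8161/jolokia/ | jq .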
Another possibility might be to use the BMIC tool, which provides access to several APIs used for managing ActiveMQ 6 and Artemis brokers (disclaimer: I am the maintainer of the tool). Using it, you can do the same thing with this command:
./bmic queue -u admin -p admin -s localhost --name test.performance.queue --purge
One benefit of the tool over the curl command is that you don't need to care about the broker parameters, as the tool will (try to) do the discovery for you.

There are lots of ways to manage an instance of Apache ActiveMQ Artemis. For example, you can use:
JMX via a GUI tool like JConsole or JVisualVM
Web-based console
REST via Jolokia
Management messages (e.g. via core, JMS, AMQP, etc.)
However, you cannot simply delete files out from underneath the broker.

Related

VMWare REST Api - access VM host details via REST API

I'm using the VMWare REST API (/api/vcenter/host) to query information about the VM hosts registered on a vCenter. At the moment I can only seem to get basic info like this:
{
  "host": "host-10",
  "name": "192.168.18.89",
  "connection_state": "CONNECTED",
  "power_state": "POWERED_ON"
}
but the PowerShell "Get-VMHost | Format-List" output has much more useful information, such as ESXi version, hardware specs, etc.
Can I get this kind of information via the REST API as well?
Thanks!
Yes, you can get that kind of information, depending on exactly what info you want. The REST API reference you can use to look up what you need is here: https://developer.vmware.com/apis/vsphere-automation/latest/
For example, if you want to know what software is installed, you can do something like this:
export basepw=$(echo -n 'administrator@vsphere.local:{password}' | base64)
export token=$(curl -k -X POST -H "Authorization: Basic ${basepw}" https://{vCenterIP}/api/session/ | tr -d '"')
curl -k -X GET -H "vmware-api-session-id: ${token}" "https://{vCenterIP}/api/esx/software" -H "Content-Type: application/json" -d '{ "auth_type": "EXISTING", "host": "{host-##}"}' | jq .
It looks like vSphere 8 has more options that might fit what you want, like extracting the host configuration: https://developer.vmware.com/apis/vsphere-automation/latest/esx/settings/hosts.configuration/
From the looks of it, some of the other SDKs are more fully developed than the REST API in its current state. Personally I like govmomi and pyvmomi, and both have a CLI tool that can get you started pretty fast. The CLI tool for govmomi, govc, doesn't require anything extra to run, so it is fairly portable and might help you with what you are doing.
https://github.com/vmware/govmomi
https://github.com/vmware/pyvmomi
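For example, with govc you could pull hardware details for a host like this (a sketch; the connection settings and the inventory path are placeholders you would need to adjust for your environment):
export GOVC_URL='https://{vCenterIP}/sdk'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='{password}'
export GOVC_INSECURE=true
# prints hardware details (manufacturer, CPU, memory, state) for the host; the inventory path is an example
govc host.info /YourDatacenter/host/YourCluster/192.168.18.89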

Trying to debug a spring dataflow stream deployed in kubernetes

I have successfully deployed a stream using Spring Cloud Data Flow in EKS, but I need to debug one of the applications in the stream.
I have set spring.cloud.deployer.kubernetes.environment-variables: JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000' on the application I want to debug, and the application starts and is listening on that port.
Is there any property to tell Kubernetes to map this port and make it accessible?
Thank you.
Try this, and then try a kubectl port-forward:
kubectl port-forward service/YOUR_SERVICE_NAME HOST_PORT:SERVICE_PORT
The documentation is really complete, by the way; there's a lot of information here:
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/
Thanks @bguess for pointing me in the right direction.
Finally, this is what I have done:
When deploying with the web interface, we click the edit button of the application we want to debug:
add JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000' into environment-variables.
In my case Kubernetes is on AWS and deployed in private mode, and the only way I have found for the moment is to create a LoadBalancer for the application. I know it is insecure, but it's enough for my needs.
Finally, as @bguess pointed out, we have to add our debug port to serverPorts. This property isn't in the list when we push the edit button of the application, so we have to type it in:
So this is the way to configure it with the web interface.
If we want to use a terminal on Linux or similar, we can follow these steps:
definition="app-source | app-process | app-sink"
curl "$scdf_url/streams/definitions" --write-out '%{http_code}' --silent --output /dev/null -X POST -d "name=poc-stream&definition=$definition&deploy=false"
Where definition is our stream definition and scdf_url is the Spring Cloud Data Flow server URL. After the curl call we will have our stream created but undeployed. To deploy it with the debug configuration:
properties="$(cat << EOF
{
"deployer.app-source.kubernetes.environment-variables":
"JAVA_TOOL_OPTIONS=’-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000’",
"deployer.app-source.kubernetes.servicePorts":"8000",
"deployer.app-source.kubernetes.create-load-balancer":"true"
}
EOF
)"
curl "$scdf_url/streams/deployments/poc-stream" --write-out '%{http_code}' --silent --output /dev/null -X POST -H "Content-Type: application/json" -d "$properties"
And that's the way I have configured it.
Additionally, you will have to increase the idle timeout of the LoadBalancer, because its default value is 60s and after that time it will disconnect you.
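If you prefer not to expose a LoadBalancer, a port-forward to the debug port can also work once the stream is deployed. A sketch, assuming the service the deployer created for the app is named poc-stream-app-source (check kubectl get svc for the real name):
# find the service that the deployer created for the app
kubectl get svc
# forward local port 8000 to the debug port exposed by the app's service
kubectl port-forward service/poc-stream-app-source 8000:8000
Then connect your IDE's remote debugger to localhost:8000.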

Kafka mongo db source connector not working

Hi, in my POC I am using both the MongoDB sink and source connectors.
The sink connector works fine, but the source connector does not push data into the resulting topic. The objective is to push the full documents of all changes (insert and update) in a collection called 'request'.
Below is the code.
curl -X PUT http://localhost:8083/connectors/source-mongodb-request/config -H "Content-Type: application/json" -d '{
  "tasks.max": 1,
  "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "org.apache.kafka.connect.storage.StringConverter",
  "connection.uri": "mongodb://localhost:27017",
  "pipeline": "[]",
  "database": "proj",
  "publish.full.document.only": "true",
  "collection": "request",
  "topic.prefix": ""
}'
No messages are getting pushed to the proj.request topic. The topic gets created once I insert a record into the 'request' collection.
Would be great to get help on this, as it's a make-or-break task for the POC.
Things work fine with the connectors on Confluent Cloud, but it's the on-premise setup on which I need to get this working.
Make sure you have a valid pipeline, with the stages included in your connector configuration, such as this one (change stream events are matched on their operationType field):
"pipeline": "[{\"$match\": {\"operationType\": {\"$in\": [\"insert\",\"update\",\"replace\"]}}}]",
Refer: https://docs.mongodb.com/manual/reference/operator/aggregation-pipeline/
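It can also help to check whether the connector and its task are actually running, and whether the task reports an error. A quick check using the standard Kafka Connect REST API, assuming the same connector name and Connect worker address as in the question:
# shows connector and task state, including the stack trace if a task has failed
curl -s http://localhost:8083/connectors/source-mongodb-request/status | jq .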

Create ActiveMQ Artemis broker with command line only

I'm trying to create an ActiveMQ Artemis broker instance using the command line only, but it seems that the allow-anonymous option is ignored and the question "Allow anonymous?" comes up anyway after I run the create command like this:
./artemis-2.17.0/bin/artemis create --user=test --password=test --allow-anonymous=Y ./broker-name
What is the right way to pass the allow-anonymous option and avoid getting that question?
If you run this command you will see all the available options for the create command:
artemis help create
One of these options is --allow-anonymous. This doesn't need to be set to any value. Also, the options which do take a value do not need the equals sign (=). Therefore, your command should look like this:
artemis create --user test --password test --allow-anonymous ./broker-name
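Once the instance has been created this way, you can start it from the instance directory that the command created:
./broker-name/bin/artemis run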

How to pass API parameters to GCP cloud build triggers

I have a large set of GCP Cloud Build Triggers that I invoke via a Cloud scheduler, all running fine.
Now I want to invoke these triggers via an external API call and pass them dynamic parameters that vary in value and number.
I was able to start a trigger by running an API request, but any JSON parameters in the API request that I sent were ignored.
Google documents substitution parameters at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values. I defined these variables in the cloudbuild.yaml file, however they were not propagated into my shell script from the API request.
I don't see any errors with authentication or authorization, so security may not be the issue.
Is my idea supported at all, or do I need to resort to another solution such as running a GKE cluster with containers that would expose their own API (a very heavyweight solution)?
We do something similar -- we migrated from Jenkins to GCB, but for some people we still need a nicer "UI" to start builds / pass variables.
I got scripts from here and modified them for our own needs: https://medium.com/@nieldw/put-your-build-triggers-into-source-control-with-the-cloud-build-api-ed0c18d6fcac
Here is their REST API: https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.triggers/run
For the script below, keep in mind you need the trigger ID of the trigger you want to run (you can also get this by parsing the output of another REST API call).
TRIGGER_ID=1
# we need to specify AT LEAST the branch name or commit id (checked below)
BRANCH_OR_SHA=$2
# check if branch_name or commit_sha
if [[ $BRANCH_OR_SHA =~ [0-9a-f]{5,40} ]]; then
  # it is a COMMIT_HASH
  COMMIT_SHA=$BRANCH_OR_SHA
  BRANCH_OR_SHA="\"commitSha\": \"$COMMIT_SHA\""
else
  # it is a BRANCH_NAME
  BRANCH_OR_SHA="\"branchName\": \"$BRANCH_OR_SHA\""
fi
# This is the request we send to Google so it knows what to build.
# Here we're overriding some variables that we have already set in the default 'cloudbuild.yaml' file of the repo.
cat <<EOF > request.json
{
  "projectId": "$PROJECT_ID",
  $BRANCH_OR_SHA,
  "substitutions": {
    "_MY_VAR_1": "my_value",
    "_MY_VAR_2": "my_value_2"
  }
}
EOF
# our curl post, we send 'request.json' with info, add our Token, and set the trigger_id
curl -X POST -T request.json -H "Authorization: Bearer $(gcloud config config-helper \
--format='value(credential.access_token)')" \
https://cloudbuild.googleapis.com/v1/projects/"$PROJECT_ID"/triggers/"$TRIGGER_ID":run
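One thing worth checking on the cloudbuild.yaml side: substitutions are only expanded inside fields of the build config (args, env, and so on); they are not automatically exported as environment variables to your shell script, so you have to map them yourself. A minimal sketch under that assumption (the step image, variable names and script name are just examples):
substitutions:
  _MY_VAR_1: 'default_value'    # defaults used when the caller does not override them
  _MY_VAR_2: 'default_value_2'
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  env:
  # map the substituted values to environment variables the script can read
  - 'MY_VAR_1=${_MY_VAR_1}'
  - 'MY_VAR_2=${_MY_VAR_2}'
  # $$ escapes the dollar sign so Cloud Build leaves it for the shell to expand
  args: ['-c', 'echo "got $$MY_VAR_1 and $$MY_VAR_2" && ./my_script.sh']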