ActiveMQ Artemis: how to filter messages by part of the "text" field

Is there any way to filter messages in ActiveMQ Artemis 2.10.0 by part of the "text" field using the management console?
I use the browse(java.lang.String) management method and try to filter my message (example below) with this expression:
text LIKE '%777-555-333-111%'
Message example:
{
  "address": "ADDRESS.EXAMPLE",
  "ShortProperties": {},
  "messageID": "11111",
  "priority": 4,
  "type": 3,
  "redelivered": false,
  "ByteProperties": {
    "_AMQ_ROUTING_TYPE": 1
  },
  "IntProperties": {
    "CamelHttpResponseCode": 200
  },
  "durable": true,
  "StringProperties": {
    "Server": "nginx\/1.19.5",
    "CamelHttpCharacterEncoding": "UTF-8",
    "Content_HYPHEN_Type": "application\/xop+xml",
    "connection": "keep-alive"
  },
  "DoubleProperties": {},
  "expiration": 0,
  "text": "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?><processId>777-555-333-111<\/processId><\/error>",
  "BooleanProperties": {},
  "FloatProperties": {}
}
However, it doesn't give me any results.
I would be grateful for a hint on whether this is possible in my current Artemis version.

The filter used by the browse management operation (as well as that used by JMS consumers, etc.) only applies to message headers and properties. You can't filter a message by the text in its body.
The data that you pasted is just the serialized message data sent to the client after the filter has already been applied.
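By contrast, filtering on a header or property does work. For example, a sketch using one of the integer properties from the message above as the browse(java.lang.String) argument:
CamelHttpResponseCode = 200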

Apache ActiveMQ Artemis also supports special XPath filters which operate on the body of a message. The body must be XML; see the documentation for further details.
To use an XPath filter, use this syntax:
XPATH '<xpath-expression>'
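For instance, a sketch for the message above (assuming the broker can parse the body as XML; note that the pasted body is not well-formed, since </error> has no matching opening tag):
XPATH '//processId[contains(text(), "777-555-333-111")]'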

Related

Batch create transcription always results in: The recordings URI contains invalid data

I would like to use the Azure Speech Services Batch Transcription APIs to create a transcription of my audio file. I've already had success using the Speech Service SDK (for Node.js), but I was interested in trying out one of the newer features available in the v3.1 preview version of the API (displayFormWordLevelTimestampsEnabled), so I figured I had to use the REST API service to do that.
Overall my problem is that for whatever input I've fed the Create Transcription API as contentUrls, I always end up getting the same error:
"error": {
"code": "InvalidData",
"message": "The recordings URI contains invalid data."
}
After a little digging, I found some tips through the Azure portal to use sox to handle transcoding the audio file in the specific format requested.
The specific format they mention in the portal documentation shows:
If you are using the REST API, make sure that it uses one of the formats in this table:

Format | Codec | Bit rate | Sample rate
WAV    | PCM   | 256 kbps | 16 kHz, mono
OGG    | OPUS  | 256 kbps | 16 kHz, mono
With the SoX-specific commands being:

Activity: Check the audio file format.
SoX command: sox --i <filename>

Activity: Convert the audio file to single channel, 16-bit, 16 kHz.
SoX command: sox <input> -b 16 -e signed-integer -c 1 -r 16k -t wav <output>.wav
I ran my MP3 through the second command and verified the result with the first; the file's metadata looks like:
Input File : 'out5.wav'
Channels : 1
Sample Rate : 16000
Precision : 16-bit
Duration : 00:00:30.09 = 481488 samples ~ 2256.97 CDDA sectors
File Size : 963k
Bit Rate : 256k
Sample Encoding: 16-bit Signed Integer PCM
Finally, I uploaded the file to a public S3 bucket, to use as my content url for my request:
POST https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions
{
  "contentUrls": [
    "https://s3.us-west-1.amazonaws.com/xxxx/out5.wav"
  ],
  "locale": "en-US",
  "displayName": "Test"
}
Still it failed with the same error that I posted above. Any insights into what might be wrong? Thanks!
Update:
The answer below mentions being able to reference a report.json file via the Get Transcription/Create Transcription API calls.
When I use the Create Transcription API, the response payload is:
{
  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
  "model": {
    "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
  },
  "links": {
    "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
  },
  "properties": {
    "diarizationEnabled": false,
    "wordLevelTimestampsEnabled": false,
    "displayFormWordLevelTimestampsEnabled": false,
    "channels": [
      0,
      1
    ],
    "punctuationMode": "DictatedAndAutomatic",
    "profanityFilterMode": "Masked"
  },
  "lastActionDateTime": "2022-09-13T23:37:09Z",
  "status": "NotStarted",
  "createdDateTime": "2022-09-13T23:37:09Z",
  "locale": "en-US",
  "displayName": "Test"
}
Calling the Get Transcription API I see:
{
  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95",
  "model": {
    "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/models/base/c3b008fa-eb47-4f6d-a5b9-71dd37870bb7"
  },
  "links": {
    "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions/02815462-e9c0-4fdc-8bbe-7b0e78152f95/files"
  },
  "properties": {
    "diarizationEnabled": false,
    "wordLevelTimestampsEnabled": false,
    "displayFormWordLevelTimestampsEnabled": false,
    "channels": [
      0,
      1
    ],
    "punctuationMode": "DictatedAndAutomatic",
    "profanityFilterMode": "Masked",
    "error": {
      "code": "InvalidData",
      "message": "The recordings URI contains invalid data."
    }
  },
  "lastActionDateTime": "2022-09-13T23:37:22Z",
  "status": "Failed",
  "createdDateTime": "2022-09-13T23:37:09Z",
  "locale": "en-US",
  "displayName": "Test"
}
And finally looking at the transcript files I'm getting an empty list:
{
  "values": []
}
I see no reference to a report.json, nor any data populated here at all.
In many cases you can get detailed error information by doing a GET on https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/<transcription_id>/files and looking at the report.json that is referenced there.
If that doesn't help, you could post the transcription ID(s) of the failed transcription(s) so someone from the team (I am one of them) can look at the service logs.
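For instance, a minimal sketch of that lookup with curl (assuming the subscription key is exported as SPEECH_KEY; <transcription_id> is the GUID from the self URL):
curl -s -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY" \
  "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/<transcription_id>/files"
The file listing should include an entry for the report, whose contentUrl serves the report.json with per-recording error details.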

What does the `port` mean in the Kafka ZooKeeper path `/brokers/ids/$id`

I have two Kafka listeners with this config:
listeners=PUBLIC_SASL://0.0.0.0:5011,PUBLIC_PLAIN://0.0.0.0:5010
advertised.listeners=PUBLIC_SASL://192.168.181.2:5011,PUBLIC_PLAIN://192.168.181.2:5010
listener.security.protocol.map=PUBLIC_SASL:SASL_PLAINTEXT,PUBLIC_PLAIN:PLAINTEXT
inter.broker.listener.name=PUBLIC_SASL
5010 is PLAINTEXT, 5011 is SASL_PLAINTEXT.
After startup, I found this information in ZooKeeper (/brokers/ids/$id):
{
  "listener_security_protocol_map": {
    "PUBLIC_SASL": "SASL_PLAINTEXT",
    "PUBLIC_PLAIN": "PLAINTEXT"
  },
  "endpoints": [
    "PUBLIC_SASL://192.168.181.2:5011",
    "PUBLIC_PLAIN://192.168.181.2:5010"
  ],
  "jmx_port": -1,
  "features": {},
  "host": "192.168.181.2",
  "timestamp": "1658485899402",
  "port": 5010,
  "version": 5
}
What does the port field mean? Why is the port 5010? Could I change it to 5011?
What you're seeing are the advertised.port and advertised.host Kafka settings, which may be parsed from the advertised.listeners list for backward compatibility. Both of these are deprecated, however; the Kafka protocol now uses the protocol map and the corresponding endpoints list instead.
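For contrast, a sketch of the legacy keys those ZooKeeper fields reflect, next to the named-listener config from the question (the legacy keys are deprecated and shown for illustration only):

# Legacy single-endpoint settings (surfaced as "host"/"port" in ZooKeeper):
advertised.host.name=192.168.181.2
advertised.port=5010

# Current approach: named listeners plus a protocol map
advertised.listeners=PUBLIC_SASL://192.168.181.2:5011,PUBLIC_PLAIN://192.168.181.2:5010
listener.security.protocol.map=PUBLIC_SASL:SASL_PLAINTEXT,PUBLIC_PLAIN:PLAINTEXT

Modern clients pick an endpoint from the endpoints list by listener name, so the legacy port field (which here appears to surface the PLAINTEXT endpoint, 5010) should not affect them.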

PropertyParams when deploying VM from OVF

I am using the VMware vCenter REST API to deploy new Virtual Machines from OVF library items. Part of the API allows for additional_parameters, but I am unable to get it to function properly. Specifically, I would like to set the PropertyParams for custom OVF template properties.
When deploying VM from OVF, I am using the following REST API:
POST https://{server}/rest/com/vmware/vcenter/ovf/library-item/id:{ovf_library_item_id}?~action=deploy
I have tried many structures and either end up with the POST succeeding but the parameters completely ignored, or with a 500 Internal Server Error with a message about failing to convert the properties structure:
Could not convert field 'properties' of structure 'com.vmware.vcenter.ovf.property_params'
The payload that seems correct from the documentation (but fails with the error above):
deployment_spec : {
  /* ... */
  additional_parameters : [
    {
      type : 'PropertyParams',
      properties : [
        {
          id : 'my_property_name',
          value : 'foo',
        }
      ]
    }
  ]
}
Given an OVF that contains the following:
<ProductSection>
  <Info>Information about the installed software</Info>
  <Product>MyProduct</Product>
  <Vendor>MyCompany</Vendor>
  <Version>1.0</Version>
  <Category>Config</Category>
  <Property ovf:userConfigurable="true" ovf:type="string" ovf:key="my_property_name" ovf:value="">
    <Label>My Property</Label>
    <Description>A custom property</Description>
  </Property>
</ProductSection>
This also fails for other property types such as boolean.
Note that I have posted on the vCenter forums as well.
I had the same issue; I managed to solve it by browsing the vAPI structure /com/vmware/vapi/metadata/metamodel/structure/id:<idstructure>.
Here are my findings:
First, get your properties structure by using the filter API:
https://{{vc}}/rest/com/vmware/vcenter/ovf/library-item/id:300401a5-4561-4c3d-ac67-67bc7a1a6
Then, to deploy, use the class com.vmware.vcenter.ovf.property_params. It will be clearer with this example:
{
  "deployment_spec": {
    "accept_all_EULA": true,
    "name": "clientok",
    "default_datastore_id": "datastore-10",
    "additional_parameters": [
      {
        "#class": "com.vmware.vcenter.ovf.property_params",
        "properties": [
          {
            "instance_id": "",
            "class_id": "",
            "description": "The gateway IP for this virtual appliance.",
            "id": "gateway",
            "label": "Default Gateway Address",
            "category": "LAN",
            "type": "ip",
            "value": "10.1.2.1",
            "ui_optional": true
          }
        ],
        "type": "PropertyParams"
      }
    ]
  }
}
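A hedged sketch of the corresponding deploy call with curl (assuming the JSON above is saved as deploy.json and that you already hold a REST session; the vmware-api-session-id value and the IDs in the URL are placeholders):
curl -k -X POST \
  -H "Content-Type: application/json" \
  -H "vmware-api-session-id: $SESSION_ID" \
  -d @deploy.json \
  "https://{server}/rest/com/vmware/vcenter/ovf/library-item/id:{ovf_library_item_id}?~action=deploy"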

Sample messages from IOT sensors for MQTT communications

There is an M2M application which wants to talk to the temperature sensors in the field, i.e. send/receive messages using the MQTT pub/sub protocol.
I have set up both IOTDM and Eclipse oneM2M with Mosquitto, but I am looking for some sample APIs/commands through which an M2M application can send a message to the MQTT client and vice versa.
Or if any of you could point me to the appropriate call flows, that would be helpful.
Any help would be highly appreciated.
Here is a GET MQTT message example:
topic: /oneM2M/req/{{origin}}/{{cse-id}}/json
message:
{
  "m2m:rqp": {
    "op": "2",
    "to": "{{resource_uri}}",
    "fr": "{{origin}}",
    "rqi": 12345,
    "pc": ""
  }
}
{{resource_uri}} is the relative path of a resource existing on the oneM2M server (e.g. /my_cse_base/my_ae)
{{origin}} is the origin enabled (by ACP) to retrieve the resource
{{cse-id}} is the CSEBase ID
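As a sketch, publishing that request with the Mosquitto CLI could look like this (the broker host is an assumption; substitute the placeholders as described above):
mosquitto_pub -h <broker-host> -t '/oneM2M/req/{{origin}}/{{cse-id}}/json' \
  -m '{"m2m:rqp": {"op": "2", "to": "{{resource_uri}}", "fr": "{{origin}}", "rqi": 12345, "pc": ""}}'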
The message received could be similar to:
topic: /oneM2M/resp/{{origin}}/{{cse-id}}/json
message:
{
  "m2m:rsp": {
    "rsc": 2000,
    "rqi": 12345,
    "pc": {
      "m2m:ae": {
        "pi": "Sy2XMSpbb",
        "ty": 2,
        "ct": "20170706T085259",
        "ri": "r1NX_cOiVZ",
        "rn": "my_ae",
        "lt": "20170706T085259",
        "et": "20270706T085259",
        "acpi": ["/my_cse_base/acp_my_ae"],
        "aei": "my_ae_id",
        "rr": true
      }
    }
  }
}
A POST example:
topic: /oneM2M/req/{{origin}}/{{cse-id}}/json
message:
{
  "m2m:rqp": {
    "op": "1",
    "to": "{{resource_uri}}",
    "fr": "{{origin}}",
    "rqi": 12345,
    "ty": "4",
    "pc": {
      "m2m:cin": {
        "cnf": "text/plain:0",
        "con": "123",
        "lbl": ["test"]
      }
    }
  }
}
{{resource_uri}} is the relative path of a resource existing on the oneM2M server (e.g. /my_cse_base/my_ae)
{{origin}} is the origin enabled (by ACP) to create a new resource
{{cse-id}} is the CSEBase ID
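Correspondingly, a sketch for watching the response topic with the Mosquitto CLI (again, the broker host is an assumption):
mosquitto_sub -h <broker-host> -t '/oneM2M/resp/{{origin}}/{{cse-id}}/json'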
For a JS talk I made an app to measure soil moisture. I used MQTT to send information from my Arduino to a server written in NodeJS. I don't know whether you have any JS skills, but you can see the code on my GitHub repo. I hope this solution can help you.

Asterisk REST ARI snoop (cURL)

I tried:
curl -v -u j123:j321 -X POST "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=SIP/695"
In response I receive:
"message": "Invalid direction specified for spy"
I have tried these values:
SIP/695; SIP:695, SIP#695, localhost#695, channel, channelName
None of them work.
A call comes into the queue from sip-416 to queue_1 and is distributed to 694. I need to connect 695 to wiretap channel 1421226074.4874.
I only need to listen, not whisper.
Help me please)
The error message is telling you what the problem is:
"message": "Invalid direction specified for spy"
The spy parameter is a direction for spying, not the channel to spy on (see reference documentation here). You've already specified the channel to snoop on in the URI path - you need to specify the direction of the media in the spy parameter.
As an aside, apparently the auto-generated wiki isn't displaying enum values, which is unfortunate. We'll have to fix that.
For reference, here's the parameter in the Swagger JSON:
"name": "spy",
"description": "Direction of audio to spy on",
"paramType": "query",
"required": false,
"allowMultiple": false,
"dataType": "string",
"defaultValue": "none",
"allowableValues": {
"valueType": "LIST",
"values": [
"none",
"both",
"out",
"in"
]
}
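Putting that together, a hedged sketch of a corrected request with spy set to one of the allowable values (myApp is a placeholder for the ARI/Stasis application that should receive the snoop channel):
curl -v -u j123:j321 -X POST \
  "http://localhost:8088/ari/channels/1421226074.4874/snoop?spy=both&app=myApp"
Use spy=in or spy=out instead of both if you only want one direction of the audio.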