ADMG0007E: The configuration data type ConfigSynchronizationService is not valid - deployment

I'm trying to automate the WebSphere deployment process for zero downtime, using the steps you can find in this link.
According to the documentation's first step, we should disable "Automatic Synchronization" for each node. To automate this I'm using the script given in the documentation, but when I try to apply the command below:
set syncServ [$AdminConfig list ConfigSynchronizationService $na_id]
I get this error: ADMG0007E: The configuration data type ConfigSynchronizationService is not valid
I've checked the IBM documentation, but I couldn't find any resource that refers to this problem.
Does anyone have any workaround or direct solution for it?
Thanks in advance for your suggestions.
PS: I should mention that the document was written for z/OS, but I'm trying to apply the same methodology to WebSphere Application Server on Linux.
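For reference, here is the kind of loop I'm running; a wsadmin (Jacl) sketch of the documented step, where the attribute name autoSynchEnabled is my assumption of the standard one:
# Disable automatic synchronization on every node agent (sketch)
foreach na_id [$AdminConfig list NodeAgent] {
    set syncServ [$AdminConfig list ConfigSynchronizationService $na_id]
    $AdminConfig modify $syncServ {{autoSynchEnabled false}}
}
$AdminConfig save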

Related

How can I set grafana 7 to work with warp10 2.7.x?

I've just installed a fresh Warp 10 standalone server, version 2.7.2.
Sending data into it with Beamium works, and seeing the data via the VS Code plugin is OK: I can see the graph in the GTS preview.
I've also installed a fresh Grafana and the warp10-plugin, following the Warp 10 recommendations on the OVH GitHub.
When executing a default Warp 10 query (via the editor), Grafana adds some strings to the query, such as the start and end values, so in the end the query looks like:
1609947849757000 'start' STORE
'2021-01-06T15:44:09.757Z' 'startISO' STORE
1610034249757000 'end' STORE
'2021-01-07T15:44:09.757Z' 'endISO' STORE
86400000000 'interval' STORE
199538106 '__interval' STORE
199538 '__interval_ms' STORE
[ 'host.local.domain' ] 'host_list' STORE
But when it executes, an error pops up in the Warp 10 logs; decoded, it says:
Exception at '=>1609947849757000<= 'start' STORE '2021-01-06T15:44:09.757Z'' in section [TOP]
It seems that it doesn't take the LONG (DOUBLE?) number for what it is, and tries to match a function with the same name, which doesn't exist.
On the Grafana side I don't get any useful help; it tells me:
WarpScript Failure on Line -1, Unable to read x-warp10-error-line and x-warp10-error-line headers in server answer
Am I missing something?
==== Edit: 2021-01-07 17:32 UTC
After the first reply, I did some more tests:
I've tried the same query, and the error is still the same:
WarpScript failure
But in VS Code this query works:
{
'token' $RTOKEN
'class' '~.*'
'labels' {}
'end' '2021-01-07T17:35:28.086Z'
'timespan' 21600000000
} FETCH
I've also tried to use the beertender stuff in Grafana and it works fine too...
So everything should work; I must be missing the obvious.
Could the Java version have an impact?
If the data source is working, you didn't miss anything.
Do you use the built-in WarpScript editor? Make sure you ticked the "WarpScript Editor" checkbox.
Then do the simplest FETCH request you can, or copy-paste the code you wrote in VS Code.
You can also define your own variables in the data source configuration; that is useful for tokens.
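For example, combining the variables the plugin injects (shown in the question) with a data source variable, a minimal editor query could look like the sketch below; readToken is a hypothetical variable you would define in the data source settings:
// readToken is a hypothetical data source variable;
// $end and $interval are injected by the plugin as shown above
{
  'token' $readToken
  'class' '~.*'
  'labels' {}
  'end' $end
  'timespan' $interval
} FETCH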
I just set up Grafana 7.3.6 + the OVH plugin to check; there seems to be no regression issue.
The full WarpScript can be found in the browser console.
The header error is linked to Grafana 7; there is no problem with Grafana 6.x. Install Grafana 6.x if you want detailed WarpScript errors.
Edit: the issue is now fixed and merged into the OVH GitHub master.
If you want to test another Warp 10 data source, you can use the credentials and advice in this article. You just need to go back to before the COVID lockdown to find relevant office beertender data...

Apache-Kafka: Issue in creating topic using windows cmd

I am new to Apache Kafka. I am using:
zookeeper-3.6.0
kafka-2.13-2.4.1
Windows-7
Earlier I was able to create and list topics on the same machine.
After I restarted my system, I am unable to create or list topics.
I am getting the errors below.
Create error: [screenshot: Topic create error]
List error: [screenshot: List Topic Error]
The error looks pretty simple. I googled a lot but, unfortunately, I was not able to find a solution.
I followed the below link for configurations:
http://programming-tips.in/kafka-set-up-apache-kafka-on-windows/
Looking forward to some expert advice.
Type the command out manually and it will work.
There is something wrong with the '-' characters in the example code; they are likely typographic dashes copied from the web page rather than ASCII hyphens, which the Kafka command-line tools do not accept.
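For example, typed out by hand with plain ASCII hyphens (a sketch assuming the default broker port; Kafka 2.4 also still accepts the older --zookeeper localhost:2181 option):
rem Create and then list a topic; every '-' below is a plain ASCII hyphen
bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092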

IBM Integration Bus: The PIF data could not be found for the specified application

I'm using IBM Integration Bus v10 (previously called WebSphere Message Broker) to expose COBOL routines as SOAP web services.
COBOL routines are integrated into IIB through MQ queues.
We have imported some COBOL copybooks as DFDL schemas in IIB, and the mapping between SOAP messages and DFDL messages is working fine.
However, when the message reaches a node where the message tree has to be serialized (for example, a FileOutput node or an MQ request), it fails with the following error:
"The PIF data could not be found for the specified application"
This is the last part of the stack trace of the exception:
RecoverableException
File:CHARACTER:F:\build\slot1\S000_P\src\DataFlowEngine\TemplateNodes\ImbOutputTemplateNode.cpp
Line:INTEGER:303
Function:CHARACTER:ImbOutputTemplateNode::processMessageAssemblyToFailure
Type:CHARACTER:ComIbmFileOutputNode
Name:CHARACTER:MyCustomFlow#FCMComposite_1_5
Label:CHARACTER:MyCustomFlow.File Output
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:2230
Text:CHARACTER:Caught exception and rethrowing
Insert
Type:INTEGER:14
Text:CHARACTER:Kcilmw20Flow.File Output
ParserException
File:CHARACTER:F:\build\slot1\S000_P\src\MTI\MTIforBroker\DfdlParser\ImbDFDLWriter.cpp
Line:INTEGER:315
Function:CHARACTER:ImbDFDLWriter::getDFDLSerializer
Type:CHARACTER:ComIbmSOAPInputNode
Name:CHARACTER:MyCustomFlow#FCMComposite_1_7
Label:CHARACTER:MyCustomFlow.SOAP Input
Catalog:CHARACTER:BIPmsgs
Severity:INTEGER:3
Number:INTEGER:5828
Text:CHARACTER:The PIF data could not be found for the specified application
Insert
Type:INTEGER:5
Text:CHARACTER:MyCustomProject
It seems like something is missing in my deployable BAR file. It's important to note that my application contains the message flow and depends on a shared library that holds all the .xsd files (DFDLs).
I suppose that the schemas are OK, as I've generated them using the Toolkit wizard, and the message parsing works well. The problem is only with serialization.
Does anybody know what may be missing here?
OutputRoot.Properties.MessageType must contain the name of the message in the DFDL schema. Additionally, when the DFDL schema is in a shared library, OutputRoot.Properties.MessageSet must contain the name of the library.
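In a Compute node before the output node, that looks something like this ESQL sketch; the library and message names are illustrative, not from the original post:
-- Point the serializer at the DFDL message definition in the shared library
SET OutputRoot.Properties.MessageSet = 'MySharedLibrary';    -- name of the shared library holding the DFDL schemas
SET OutputRoot.Properties.MessageType = 'MyCopybookMessage'; -- name of the message in the DFDL schema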
Sounds as if OutputRoot.Properties is not pointing at the shared library. I cannot remember which subfield does that job; it is either OutputRoot.Properties.MessageType or OutputRoot.Properties.MessageSet.
You can easily check: just look at the contents of InputRoot.Properties after an input node that has used the same shared library.
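For example, a Trace node wired after the input node with the following pattern dumps the whole Properties folder, so you can read the values off directly:
${Root.Properties}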
I faced a similar problem. In my case, a message flow with an HttpRequest node using a DFDL domain parser/format to parse an HTTP response from the remote system threw this error ("PIF data could not be found for the specified application"). "Re-selecting" the same parser domain and message type on the node, followed by a build and redeploy, solved the problem. It seemed to be a project-reference issue within the IIB Toolkit.
You need to create static libraries and reference them from the application.
In the Compute node, your coding is based on the DFDL body.

Installing OpenTSDB on Ubuntu 15.04

I have installed OpenTSDB (.deb package) on Ubuntu 15.04 by following the guidelines stated in the documentation. When I run the command "service opentsdb start", it does not start. The documentation mentions that we have to change some configuration files. Can anyone please tell me what changes have to be made, and in which files?
Thanks in advance
Regards
VHC
Check your logs in /var/log/opentsdb/opentsdb.log. You should have HBase up and running correctly (meaning, for example, that you are able to create a table and store some values).
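A quick sanity check with standard HBase shell commands (the table and column family names are just examples):
hbase shell
create 'sanity_check', 'cf'
put 'sanity_check', 'row1', 'cf:a', 'value1'
scan 'sanity_check'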
Remember you have to create OpenTSDB's tables in HBase by running:
env COMPRESSION=NONE HBASE_HOME=path/to/hbase-X.XX.X /usr/share/opentsdb/tools/create_table.sh
http://opentsdb.net/docs/build/html/installation.html#create-tables
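For reference, the settings the documentation refers to normally live in /etc/opentsdb/opentsdb.conf for the .deb install; the values below are examples to adapt, with tsd.storage.hbase.zk_quorum pointing at your ZooKeeper/HBase host:
tsd.network.port = 4242
tsd.http.staticroot = /usr/share/opentsdb/static/
tsd.http.cachedir = /tmp/opentsdb
tsd.storage.hbase.zk_quorum = localhost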

Hortonworks-oozie

I am trying to run a workflow on a Hortonworks cluster using Oozie.
Getting the following error:
Error: Invalid workflow-app, org.xml.sax.SAXParseException: cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element 'hive'.
Does anyone know the reason?
At least a sample Hive workflow.xml that can be run on the Hortonworks distribution would be helpful.
This has to do with the first line of your workflow:
<workflow-app name="${workflowName}" xmlns="uri:oozie:workflow:0.4">
Specifically: uri:oozie:workflow:0.4
The xmlns value tells Oozie which XML schema to follow. I am assuming you used an online resource to build an action, which may be in a newer schema than the one you specified.
There are these versions:
- uri:oozie:workflow:0.1
- uri:oozie:workflow:0.2
- uri:oozie:workflow:0.2.5
- uri:oozie:workflow:0.3
- uri:oozie:workflow:0.4
See: Oozie Workflow Schemes
But usually setting yours to the one in the code example above (0.4) will work for all newer workflows.
Actions also have schemas, so it is important to look at what functions they support in each version.
The Hive action schema currently goes up to 0.5, I believe, although I use 0.4 with this line:
<hive xmlns="uri:oozie:hive-action:0.4">
If this does not help, please update the question with your workflow for further help.
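Since a sample was requested: a minimal Hive workflow sketch, with placeholder property names and script path (not tested against a specific HDP version):
<workflow-app name="${workflowName}" xmlns="uri:oozie:workflow:0.4">
    <start to="hive-node"/>
    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.4">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>myscript.q</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive action failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
    </kill>
    <end name="end"/>
</workflow-app>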