Should I remove comments in the .ini files in Grafana? - visual-studio-code

I am trying to work on Grafana as code, and I have installed Grafana on my Windows 10 machine. Now I want to configure Grafana through the configuration files. This page (Grafana Configuration)
says that I have to uncomment the lines in the .ini files, but some of them don't seem necessary. Should I still uncomment every line? Thanks for your answer.

You don't need to uncomment everything. Most settings have default values, for example the Grafana port 3000.
You only need to uncomment a line if you want to change its default value, for example to run Grafana on a different port such as 5000.
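For example, to change the port you would uncomment and override just that one setting in your custom.ini (a minimal sketch; the section and key names are the ones from the default Grafana config, everything else stays commented):
[server]
; http_port = 3000  <- the default, can stay commented
http_port = 5000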

Related

How can I set grafana 7 to work with warp10 2.7.x?

I've just installed a fresh Warp 10 standalone server, version 2.7.2.
Using Beamium to send data into it works, viewing the data via the VS Code plugin is OK, and I can see the graph in GTS preview.
I've also installed a fresh Grafana and the warp10-plugin, following the Warp 10 recommendations on the OVH GitHub.
When executing a default Warp 10 query (via the editor), Grafana adds some strings to the query, like the start or end values, so in the end the query looks like:
1609947849757000 'start' STORE
'2021-01-06T15:44:09.757Z' 'startISO' STORE
1610034249757000 'end' STORE
'2021-01-07T15:44:09.757Z' 'endISO' STORE
86400000000 'interval' STORE
199538106 '__interval' STORE
199538 '__interval_ms' STORE
[ 'host.local.domain' ] 'host_list' STORE
But when executing, an error pops up in the Warp 10 logs; after decryption it says:
Exception at '=>1609947849757000<= 'start' STORE '2021-01-06T15:44:09.757Z'' in section [TOP]
It seems that it doesn't take the LONG (DOUBLE?) number for what it is, and tries to match a function with the same name that doesn't exist.
On the Grafana side, I don't get any useful information; it tells me:
WarpScript Failure on Line -1, Unable to read x-warp10-error-line and x-warp10-error-line headers in server answer
Am I missing something?
==== Edit: 2021-01-07 17:32 UTC
After the first reply, I did some more tests:
I've tried the same query, and the error is still the same:
Warpscript failure
But in VScode this query works:
{
'token' $RTOKEN
'class' '~.*'
'labels' {}
'end' '2021-01-07T17:35:28.086Z'
'timespan' 21600000000
} FETCH
I've also tried to use the bartender stuff in Grafana and it works fine too...
So everything should work; I must be missing the obvious.
Can the Java version have an impact?
If the datasource is working, you didn't miss anything.
Do you use the built-in WarpScript editor? Make sure you ticked the "WarpScript Editor" checkbox.
Then do the simplest FETCH request you can, or copy-paste the code you wrote in VS Code.
You can also define your own variables in the datasource configuration; this is useful for tokens (see the sketch below).
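For example, a minimal FETCH sketch, assuming you defined a variable named token in the datasource configuration (the class selector and timespan are placeholders to adapt):
// $token comes from the variables defined in the datasource configuration
{
  'token' $token
  // match every class; narrow this selector for real use
  'class' '~.*'
  'labels' {}
  'end' NOW
  // last 6 hours, expressed in microseconds as in the query above
  'timespan' 21600000000
} FETCH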
I just set up Grafana 7.3.6 + the OVH plugin to check; there seems to be no regression issue.
The full WarpScript can be found in the browser console.
The header error is linked to Grafana 7; there is no problem with Grafana 6.x. Install Grafana 6.x if you want detailed WarpScript errors.
Edit: the issue is now fixed and merged into the OVH GitHub master.
If you want to test another Warp 10 data source, you can use the credentials and advice in this article. You just need to go back before the COVID lockdown to find relevant office beertender data...

Grafana custom data source

Please help me with this; I'm fairly new to Grafana.
Things accomplished so far:
1. Managed to get a JSON output from CSV (basically looking to plot transaction availability based on success/fail criteria)
2. Wanted a Prometheus source imported
Can you help me with how to proceed in building a dashboard with either Graphite or Prometheus? Much appreciated, thanks.
In order to use the Prometheus datasource, you need to follow the steps below (a provisioning-file alternative is sketched after the list):
Start the Prometheus server. By default, it listens on port 9090.
Log in to the Grafana console and go to the "Add data source" option.
Select Prometheus as the datasource type from the drop-down.
Insert the Prometheus URL (you got this in step 1).
Access should be "direct" if you aren't using any proxy to establish the connection.
Hit "Save & Test".
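If you prefer to keep the setup as code rather than clicking through the UI, recent Grafana versions can also provision the datasource from a YAML file. A sketch under that assumption (the path and values are illustrative):
# e.g. <grafana>/conf/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://localhost:9090   # the URL from step 1
    access: proxy                # use "direct" only if no proxy is involved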

How do I configure a webserver for a collective in Bluemix?

I found documentation indicating that I need to set up a webserver in my collective environment; however, I cannot determine the correct set of steps. Thoughts?
It would help to see what you've already tried, but consider the following:
Create two or more servers on one or more of the hosts and join them to the collective. Make sure your servers are clusterMembers and collectiveMembers. The following post should help with creating servers and joining them to the collective:
How can I setup a cell and collective in Bluemix
Update the controller's /etc/hosts file with the hostnames of all the hosts in the collective.
Download and follow this guide to generate the plugin-cfg.xml file on the controller:
https://developer.ibm.com/wasdev/downloads/#asset/scripts-jython-Generate_Cluster_Plugin
Copy the generated plugin-cfg.xml file to /opt/IBM/WebSphere/HTTPServer/conf
Edit /opt/IBM/WebSphere/HTTPServer/conf/httpd.conf and uncomment these two lines at the bottom of the file:
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so
WebSpherePluginConfig /opt/IBM/WebSphere/Profiles/Liberty/servers/controller/pluginConfig/myLibertyCluster-plugin-cfg.xml
Change the WebSpherePluginConfig value to be /opt/IBM/WebSphere/HTTPServer/conf/plugin-cfg.xml
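After those edits, the bottom of httpd.conf should look roughly like this (same paths as in the steps above; adjust them if your installation differs):
LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so
WebSpherePluginConfig /opt/IBM/WebSphere/HTTPServer/conf/plugin-cfg.xml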
Stop and start the HTTP server
sudo ./apachectl stop
sudo ./apachectl start
Verify the application can be reached through the webserver at <webserverIP>:80/appname.
Regenerate the plugin-cfg.xml if an application is added or removed.

ElasticSearch with Play 2 configuration

I am trying to use the ElasticSearch module (https://github.com/cleverage/play2-elasticsearch) with my Play 2 application. In the readme, it says I should add the following to my application.conf:
## define local mode or not
elasticsearch.local=false
## list clients
elasticsearch.client="192.168.0.46:9300"
# ex : elasticsearch.client="192.168.0.46:9300,192.168.0.47:9300"
What is local mode? What is my client URL supposed to be? I can not find any information on what these options should be. With my current options, I get a NoNodeAvailableException.
Some people suggest:
elasticsearch.local=false
elasticsearch.client=mynode1:9200,mynode2:9200
But what are mynode1 and mynode2? It doesn't work with my application. Can anyone help? Thanks
What is local mode?
If elasticsearch.local=true, an Elasticsearch node is started inside your application (embedded).
What is my client URL supposed to be?
It's your host:port, but the port is the TCP transport port defined on your Elasticsearch node.
By default, the transport port starts at 9300 (http://www.elasticsearch.org/guide/reference/modules/transport.html).
I can not find any information on what these options should be. With my current options, I get a NoNodeAvailableException.
I think you have a problem with the port number.
mynode1 and mynode2 are elasticsearch nodes.
Do you have any Elasticsearch node running?
On which IP address?
Can you try to connect to these nodes using curl? For example:
curl localhost:9200
Or
curl YOURIPADDRESS:9200
If one of these succeeds, then configure your Play app using YOURIPADDRESS:9300, as Nicolas Boire wrote before.
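In that case, a minimal application.conf sketch would be (YOURIPADDRESS is a placeholder for the host the successful curl ran against):
## embedded node disabled, talk to the external cluster instead
elasticsearch.local=false
## transport port 9300, not the HTTP port 9200
elasticsearch.client="YOURIPADDRESS:9300"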
If neither succeeds, check that you have installed and started Elasticsearch first.
HTH
I've just had the same problem; be sure that you respect the version requirements listed in the table: https://github.com/cleverage/play2-elasticsearch
At the beginning, I had set up the latest version of the plugin, 0.8.1, but my Elasticsearch version was 1.0.2.
After starting ES with version 0.9.13, it worked.

haproxy - which configuration files

I have an HAProxy install which was configured by someone who has left the company. It runs on Ubuntu 10.04 and it seems to use three configuration files in the directory /etc/haproxy:
haproxy.cfg
haproxy.http.cfg
haproxy.https.cfg
I don't see the point of the haproxy.https.cfg file, as I believe (in our configuration) everything could be handled by the single haproxy.http.cfg file, but when I remove that httpS file it complains bitterly and refuses to run. My question:
Is this the standard configuration HAProxy uses? If not, I can't find a reference to the "S" file anywhere. Can anyone suggest how HAProxy concludes it should use it?
Thanks
The very answer to your question: your HAProxy is simply launched with those three config files (-f haproxy.cfg -f haproxy.http.cfg -f haproxy.https.cfg), probably from /etc/init.d/haproxy, but mileage varies depending on your distribution.
If you remove the file, of course it will complain.
This is not particularly standard, but it isn't bad either; it helps structure the configuration rather than having one very long file.
The task of the .https version is most likely to redirect HTTPS traffic towards a service that can handle HTTPS (usually stunnel or nginx), since HAProxy cannot terminate SSL connections (stunnel has to be patched; see the HAProxy page).
If you want, you can merge those files into one or two; just find out how HAProxy is launched (check the init.d script, or let us know which distribution) and fix it appropriately.
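For instance, on Ubuntu you can usually find the exact launch line like this (a sketch; the file names are the ones from your /etc/haproxy):
# look for the -f options in the init script and the defaults file
grep -n -- '-f' /etc/init.d/haproxy /etc/default/haproxy
# a typical launch line passes one -f per configuration file, e.g.:
# haproxy -f /etc/haproxy/haproxy.cfg -f /etc/haproxy/haproxy.http.cfg -f /etc/haproxy/haproxy.https.cfg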
I believe that it is only /etc/haproxy/haproxy.cfg that is used by default.
This may be of use to you (1.4 configuration reference):
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt