'Add Service' Disabled on Ambari 1.6.0 Sandbox

I'm experimenting with Ambari in the Hortonworks sandbox and have hit a roadblock. When I try to add Kafka in order to do real-time processing, the 'Add Service' dropdown in Ambari is disabled. This seems to be a known bug. I followed the fix suggested here: http://docs.hortonworks.com/HDPDocuments/Ambari-1.5.1.0/bk_releasenotes_ambari_1.5.1/content/ch_relnotes-ambari-1.5.1.0-knownissues.html, which completed but did not fix the issue. After that, I tried upgrading my version of Ambari from 1.5.0 to 1.6.0, which also completed but likewise did not fix the issue. I have restarted my VM and cleared my browser cache, and seem to have run out of options.
Is there anything else I can try to get this working?
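A hedged diagnostic, not a definitive fix: the wizard can stay greyed out while the cluster still has services in a transitional state, so it is worth confirming through the Ambari REST API that everything settled after the upgrade. The cluster name "Sandbox" and the admin/admin credentials below are the sandbox defaults and may differ on your VM.

    # Check that ambari-server itself is up after the upgrade.
    ambari-server status

    # List clusters (sandbox default credentials are admin/admin).
    curl -u admin:admin http://localhost:8080/api/v1/clusters

    # Inspect per-service state; anything not settled in INSTALLED/STARTED
    # may be what keeps the wizard disabled.
    curl -u admin:admin "http://localhost:8080/api/v1/clusters/Sandbox/services?fields=ServiceInfo/state"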

Related

moodle - database connection failed when running sync.php script

I am setting up a Moodle plugin for enrolment from an external database, based on the instructions at https://docs.moodle.org/39/en/External_database_enrolment. I was successful in setting up a similar plugin for authentication from an external database, but I am having a problem with the enrolment plugin. When I test the plugin settings, it succeeds, as shown here: https://i.stack.imgur.com/fdrqG.png
But when I run php sync.php, I get:
Error: Database connection failed
It is possible that the database is overloaded or otherwise not running properly.
The site administrator should also check that the database details have been correctly specified in config.php
My Moodle version is 3.9.1 and I am using macOS.
Thank you in advance.
First, turn on debugging to get the actual errors.
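For example, a minimal sketch of turning debugging on from the command line, assuming you run from the Moodle root (the admin/cli/cfg.php helper ships with Moodle 3.8 and later; 32767 corresponds to DEBUG_DEVELOPER, i.e. E_ALL | E_STRICT, on PHP 7):

    # Raise the debug level so sync.php prints the underlying database error.
    php admin/cli/cfg.php --name=debug --set=32767
    php admin/cli/cfg.php --name=debugdisplay --set=1

    # Re-run the enrolment sync verbosely.
    php enrol/database/cli/sync.php -v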
Finally found that the problem was that the plugin does not work with PHP 7.4 (while Moodle itself does).
Fixed by downgrading PHP to 7.2.
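For reference, a sketch of one way to do that downgrade on macOS, assuming PHP was installed through Homebrew and the php@7.2 formula is still available:

    # Install the 7.2 series alongside the current PHP and switch the CLI to it.
    brew install php@7.2
    brew unlink php
    brew link --force --overwrite php@7.2
    php -v   # should now report PHP 7.2.x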

in storm 1.2.2 deployment, ui is stuck in "loading summary"

I am trying to deploy Apache Storm for streaming program development: four servers, one for Nimbus/UI and three for supervisors. After setup, Nimbus, the UI, and the supervisors all launch with no errors or exceptions reported in the logs. But when I access the UI via a browser, the page is stuck on a spinning "loading summary" prompt, seemingly forever.
I killed all processes and cleared the ZooKeeper node "/storm" and the Storm data folder, but the UI is still stuck after restarting the cluster.
I also switched to the older release 1.1.3 and hit the same problem, so this looks less like a version-specific bug and more like a problem with my setup, but it is really confusing to figure out. Can anyone familiar with Storm help me with this?
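One hedged way to see what the spinner is hiding: the summary page populates itself from the Storm UI REST API, so querying that endpoint directly and tailing the UI log often surfaces the underlying error. This sketch assumes the default UI port 8080 and a standard log directory:

    # Query the same endpoint the "loading summary" page polls; a JSON reply
    # means the UI daemon is fine, while an error or timeout points at
    # Nimbus connectivity.
    curl -s http://<ui-host>:8080/api/v1/cluster/summary

    # Watch the UI log while reloading the page to catch the underlying exception.
    tail -f /path/to/storm/logs/ui.log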

Unable to Finish connecting to SonarQube server

This is going to sound like a ridiculous question, but using the SonarLint Eclipse plugin (v3.2.0) on the latest Eclipse (Oxygen), I am unable to add a new SonarQube server connection.
I am working behind a company firewall, but that doesn't appear to be the issue. I am following the steps here and am able to successfully connect to our internal SonarQube instance and provide my credentials, but at the final step the 'Finish' button does not seem to do anything; see screen below:
I appreciate there are probably some background processes that need to run in order for this Finish to actually finish :) But it doesn't appear to be doing anything. Has anyone else experienced this issue?
And before people ask: I've restarted Eclipse and my laptop, uninstalled and reinstalled the SonarLint plugin, etc.
Thanks in advance!
SonarLint in Eclipse stores credentials in the Eclipse secure storage, which is itself protected by a master password, so you may need to reset or delete that entry before you can add a new SonarQube server connection. You can try these steps:
In Eclipse, go to Window > Preferences, then filter for Secure Storage.
In the Contents tab, find and highlight org.sonarlint.eclipse.core, then click Delete > Apply > OK. After the deletion finishes, Eclipse will ask whether you want to restart the IDE. It is strongly recommended that you restart the IDE and then try adding the SonarQube server again.
Thanks.
On my Linux machine I had the same issue, because the master password provider in use doesn't work properly.
This answer worked for me:
Open Window > Preferences
Go to General > Security > Secure Storage
At Master password providers, uncheck the provider currently in use. The enabled provider with the highest priority is the one in use (for me it was "Linux Integration (64 bit)").
Click Apply.
I also encountered this problem, but was able to work around it.
This is the environment in which I was running:
Eclipse Oxygen.1
Linux VM (VirtualBox) on Windows host
The solution that worked for me, based on this post:
Uninstall SonarLint.
Reinstall using Help -> Install new software...
On the Install dialog, uncheck the option "Show only the latest versions of available software".
Select the older version of SonarLint.
Select Next and continue with the install.
After installing, configure your SonarQube server like normal.
Upgrade to the latest version of SonarLint via Help -> Check for Updates
In my case the problem also concerned the credentials storage, but it was caused by the Avecto Defendpoint Client. The company had restricted the permissions to create subfolders in the user home directory (C:\Users\<username>). I had to manually create the missing subfolders (.sonarlint and .eclipse) after elevating my access level and filling in the reason in a text field, and then grant myself permissions on those folders. Having created them, I could proceed with adding the server to the SonarLint plugin.

How to create openshift application for OPENSHIFT ONLINE 3 STARTER (NEW!) server in Eclipse IDE?

I am trying to create an OpenShift 3 application in the Eclipse IDE after installing the JBoss Developer Tools plugin, but I get the error below when signing in to OpenShift.
Error: The server type, credentials, or auth scheme might be incorrect:
I have also tried other server hostnames, such as https://console.starter-us-east-1.openshift.com/console/ and more, but it still does not work.
However, when I tried to log in using the oc tool (OpenShift CLI) with the same credentials (as seen in the picture), I did not get any error.
I also tried to run rhc (the OpenShift client tool), but during rhc setup it says "You are not authorized to perform this operation."
Please help me solve this.
First of all, it looks like you're using an outdated version of the JBoss Tools OpenShift plugin, because the "New OpenShift Application" wizard looks a little different at the moment. So try to update it:
Help -> About Eclipse -> Installation Details -> Update..., then choose at least all the JBoss Tools plugins that it reports (best to choose everything reported) and update them.
Secondly, what URL do you use to access the OpenShift web console in your browser? It seems to be https://console.starter-us-east-1.openshift.com. Are you able to log in there with your credentials? If so, the same should work in the JBoss Tools OpenShift plugin. Check this and this article for more information about using it.
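As a sanity check outside Eclipse, here is a sketch assuming the starter-us-east-1 endpoint from the question; if the wizard keeps failing with username/password, the session token printed by oc whoami -t may work in a token-based connection instead:

    # Log in with the same credentials the IDE wizard is using.
    oc login https://api.starter-us-east-1.openshift.com -u <username>

    # Print the session token for the current login.
    oc whoami -t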

No error message on "The PowerShell script failed to execute", Service Fabric app upgrade from VS15

I used to be able to do upgrades, but it suddenly won't work, neither to the cloud nor locally.
And there is no error message, so I have no clue what to do.
Publishing a new application works fine.
There seems to be something corrupt in the solution, but I have no idea where to look for it.
In a brand-new project, publishing and upgrading a stateful service created from the template works without problems.
Things I have done:
Cleaned solution.
Rebuilt solution.
Deleted Debug folders in bin and obj folders.
Restarted Visual Studio.
Cleared MEF Component Cache.
Restarted machine.
Restarted cluster.
The brand-new project is deployed to the same cluster and upgrading it works, so I have not deleted the cluster to deploy a new one.
No error is thrown, so wrapping the script in a try/catch seems pointless.
What else can I do here to find out what is going wrong? Any suggestions?
I found out it might be a bug. It is reported here:
https://github.com/Azure/service-fabric-issues/issues/240
Summary: when an application is deployed with compression, further upgrades require a config version bump, even though the config has not changed.
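To illustrate the workaround (a sketch of the manifest edit, not taken verbatim from the issue thread): bump the version of the otherwise-unchanged config package in ServiceManifest.xml, and propagate the bump to the service and application manifest versions, before running the upgrade.

    <!-- ServiceManifest.xml: the settings are unchanged, but the version is
         bumped (e.g. 1.0.0 -> 1.0.1) so the compressed upgrade is accepted. -->
    <ConfigPackage Name="Config" Version="1.0.1" />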