Lagom - Could not find Cassandra contact points, due to: access denied - scala

I am trying to run the Lagom sample online-auction-scala, and it fails with:
Could not find Cassandra contact points, due to: Access Denied. Your credentials could not be authenticated. Credentials are missing.
I am not sure how to fix this.

Is your configuration routing localhost requests through a proxy? Can you disable that so that localhost requests stay local?
I don't know how your machine/proxy is configured, so I'm not sure what to do specifically. You might need to set the "http.nonProxyHosts" system property when launching sbt.
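For example, something like this might work when launching sbt (the host list here is just a guess for a typical local setup; the same flag can also go into the SBT_OPTS environment variable):
$ sbt -Dhttp.nonProxyHosts="localhost|127.0.0.1"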
These links might be helpful:
http://docs.oracle.com/javase/8/docs/technotes/guides/net/proxies.html
http://www.scala-sbt.org/0.13/docs/Setup-Notes.html#HTTP%2FHTTPS%2FFTP+Proxy

Related

Getting Started With PeerJS

I am trying the simplest example I can, pulled directly from their website. Here is my entire html file, with code taken exactly from https://peerjs.com/index.html:
<script src="https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js"></script>
<script>
  var peer = new Peer();
  var conn = peer.connect('another-peers-id');
  // 'open' fires when you successfully connect to the PeerServer
  conn.on('open', function () {
    // here you have conn.id
    conn.send('hi!');
  });
</script>
In Chrome and Edge I get this in the console:
peerjs.min.js:64 GET https://0.peerjs.com/peerjs/id?ts=15956160926060.016464029424720694 net::ERR_CONNECTION_REFUSED
In Firefox I get this:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://0.peerjs.com/peerjs/id?ts=15956162489620.8436734374800061. (Reason: CORS request did not succeed).
What am I doing wrong?
@reyad has requested "a full trace of requests and responses". Here's what I see in my network tab in Firefox, and in Chrome: [network-tab screenshots omitted]
[Note: It would have been better if you could provide a full trace of requests and responses. This problem may occur for several reasons. I'll state two solutions, so try those. If they don't work, provide a full trace of requests and responses.]
1. First Solution:
Sometimes this type of error occurs because of a self-signed certificate. To solve this problem, open the developer tools, then go to the network tab. You'll see a list of requests. Select the request that failed because of CORS (i.e. the one which gave you Reason: CORS request did not succeed) and open it (i.e. click it). If your problem is related to the certificate, you'll see the following error message:
AN ERROR OCCURRED: SEC_ERROR_INADEQUATE_KEY_USAGE
To solve this, go to the URL that is causing the problem and accept the certificate manually.
2. Second Solution:
Check the failed request in the network tab of the developer tools (same as described in 1. First Solution). You'll find a Transferred column. See what's written in the Transferred column of the failed request. If it says the request was blocked by an ad blocker, disable the ad blocker and your request should work fine.
[P.S.]: These solutions are proposed on assumptions. I hope they work. If these two do not work, then please provide more info about the requests and responses. And also check this.
3. Third and Final Solution:
[Note: This solution may not solve your problem directly, but it'll give you an alternative solution, and also some insight into what your problem is and how to work around it.]
Before reading the solution below, read this to understand how Access-Control-Allow-Origin works (it is the reason for the CORS error).
Let me first explain how peerjs works:
PEERJS works based on a PEER ID. You have to get a PEER ID either from the PEERJS CLOUD SERVER, or provide one yourself in the PEER CONSTRUCTOR, i.e. new Peer("some-peer-id"). The peer id has to be unique, because it is what identifies each user; peerjs uses this PEER ID to send and receive data from user to user.
Now, you should know that you are using the PEERJS CLOUD SERVER to get/generate a unique peer id; it is the default server PEERJS uses unless you specify another one.
Now let me explain why you're facing this problem:
As you already know how CORS works, you may have guessed that https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js (the downloaded js file) is calling https://0.peerjs.com to retrieve/generate a new unique PEER ID, but this request from https://your.website.com does not have Access-Control-Allow-Origin access for some reason. It may also be a middleware problem, so it's difficult to tell where the problem is actually occurring. One thing is for sure though: it's not a fault in your code :D.
I hope all the concepts I've stated above are clear.
Now, to the solutions:
Alternative approach 1 (using the PEERJS CLOUD SERVER and your own provided id):
In this approach you generate your own unique PEER ID, so "https://your.website.com" does not have to call "https://0.peerjs.com" for a unique peer id. [Note: make your peer id long enough that it is always unique, at least 64 chars long.]
In this way, you can avoid the CORS problem.
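A minimal sketch of this approach (the id generation here is illustrative; anything sufficiently long and collision-resistant will do):
// Sketch: supply your own long, unique peer id so the client
// never asks https://0.peerjs.com to generate one.
var myId = 'my-app-' + Array.from(crypto.getRandomValues(new Uint8Array(32)),
  function (b) { return b.toString(16).padStart(2, '0'); }).join('');
var peer = new Peer(myId); // 64+ chars, generated locally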
Update:
I just saw a new issue on GitHub which says the public peerjs cloud server is currently unstable or not working properly. It just gives errors like Firefox cannot establish a connection with the server at the address wss://0.peerjs.com/peerjs?key=peerjs&id=123222589562487856955685485555&token=ocyxworx62i in Firefox, and Error in connection establishment: net::ERR_CONNECTION_REFUSED in Chrome. For details check here. So it's better to use your own server (see the next approach).
Alternative approach 2 (using your own peerjs server):
You can host your own peerjs server instead of the PEERJS CLOUD SERVER. That way, you can allow access to any website you want. If you want to know how to host a peerjs server, you may visit here.
[P.S.]: I have studied the peerjs issues on GitHub. After reading them, it seems better to use your own server rather than the peerjs cloud. There are various problems with each new release of peerjs, mostly related to connecting to the peerjs cloud, and the peerjs cloud itself does not seem stable. They were hosting it at 0.peerjs.com:9000 before (not secure), but now at 0.peerjs.com:443.
I haven't used peerjs before nor set up a peerjs server. If you want to set one up, I hope the community will be able to help you do that properly.
What I understand from your question is that there is a CORS (Cross-Origin Resource Sharing) issue. Maybe what I am suggesting is not very intuitive.
First: download "https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js" to your local directory, and then include the local javascript file in the html,
like: <script src="./peerjs.min.js"></script>
Second:
You are using var peer = new Peer();
but please provide your own unique id instead. For example, I just created a random id and provided it.
StackOverflow link: https://stackoverflow.com/questions/21216758/peerjs-set-your-own-peerid#:~:text=1%20Answer&text=Provide%20a%20peer%20id%20when,to%20under%20Create%20a%20peer.
var a_random_id = Math.random().toString(36).replace(/[^a-z]+/g, '').substr(2, 10);
var peer = new Peer(a_random_id, {key: 'myapikey'});
Third: the best option is to run your own PeerServer (a server for PeerJS).
If you don't want to develop anything, just run the few commands below.
Install the package globally:
$ npm install peer -g
Run the server:
$ peerjs --port 9000 --key peerjs --path /myapp
Started PeerServer on ::, port: 9000, path: /myapp (v. 0.3.2)
Check it: http://127.0.0.1:9000/myapp. It should return JSON with name, description, and website fields.
Details: https://github.com/peers/peerjs-server
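Once a server like the one above is running, the client can be pointed at it instead of the PeerJS cloud. A minimal sketch (host, port, path, and key must match whatever you started the server with):
var peer = new Peer('your-own-peer-id', {
  host: 'localhost', // wherever peerjs-server runs
  port: 9000,        // matches --port 9000 above
  path: '/myapp',    // matches --path /myapp above
  key: 'peerjs',     // matches --key peerjs above
  secure: false      // set to true once the server is behind HTTPS
});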

How do I Re-route Ghost Blog Admin URL without modifying the API Address?

Ghost blog platform has a setting that allows you to change the admin panel login location (which starts as: https://whateveryoursiteis.com/ghost). Methodology / docs for changing that setting can be found here: https://ghost.org/docs/config/#admin-url
However, when using the above methodology, the API URL used for search etc. is ALSO modified, meaning all requests to the Ghost API will be forwarded to the alternate domain (not just admin access).
My question is — what is the best way to achieve a redirect of the admin URL to a different Domain / protocol while allowing the API url used by Ghost to remain the same?
More background.
We are running Ghost on top of GKE (Google Kubernetes Engine) with a multi-region ingress. We dump our CloudSQL DB down to a SQLite file, build that database into our production Docker containers, and deploy those to the different Kubernetes nodes fronted by the GCE-Ingress load balancer.
Since we need to rebuild that database/container on content change (not just on code change), we need a separate admin URL backed by Cloud SQL where we can persist/modify our data, which then triggers the rebuild in our CI pipeline via Ghost webhooks.
Another related question might be:
Is it possible to use standard Ghost redirects (created via: https://docs.ghost.org/concepts/redirects/) to redirect the admin panel URL (i.e. https://whateveryoursiteis.com/ghost) to a different domain (i.e. https://youradminsite.com/ghost)?
Another Related GKE / GCE-Ingress Question:
Is it possible to create 301 redirects natively using Kubernetes GCE-Ingress on GKE, without adding an nginx container etc.?
That will be my first attempt after posting this, but I figured either way it might help another Ghost platform fan down the line someplace. I will respond back as I find answers to those questions (assuming someone doesn't beat me to it!).
Regarding your question about whether it's possible to create 301 redirects without adding an nginx container: I can suggest using Istio; find more information about traffic routing here.
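For what it's worth, a rough, untested sketch of such a redirect as an Istio VirtualService might look like this (the hostnames and gateway name are placeholders):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ghost-admin-redirect
spec:
  hosts:
  - whateveryoursiteis.com
  gateways:
  - your-gateway
  http:
  - match:
    - uri:
        prefix: /ghost
    redirect:
      authority: youradminsite.com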
OK. So as it turns out, the Ghost team currently has things set up to point API connections at the admin URL. So if you change your admin URL, expect your clients to attempt to connect to that URL.
I am going to be raising the potential of splitting these off as a feature request over on the ghost forums (as soon as I get out from under pre-launch hell on the current project).
Here's the official Ghost response:
What is referred as 'official docker image' is not something that we
as a Ghost team support.
The APIs are indeed hosted under the same URL as the admin and that's
by design and not really a bug. Introducing configuration options for
each API Ghost instance hosts would be a feature and should be
discussed at our forum first 👍 I think it's a nice idea to be able to
serve APIs from different host, but it's not something that is within
our priorities at the moment.
In case you need more granular handling of admin site, you could
introduce those on your proxy level and for example, handle requests
that are coming to /ghost/api with a different set of rules.
See the full discussion over here on the TryGhost GitHub:
https://github.com/TryGhost/Ghost/issues/10441#issuecomment-460378033
I haven't looked into what it would take to implement the feature, but the suggestion of proxying the request could work... if only I didn't need to run on GKE multi-region (which requires GCE-Ingress, which doesn't support redirection, hah!). This would be relatively easy to solve with the nginx ingress.
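For anyone who can put nginx in front of Ghost, the proxy-level split suggested in the Ghost response above might look roughly like this (a completely untested sketch; the upstream name and admin domain are placeholders):
# longest-prefix match wins, so the API stays local while /ghost redirects
location /ghost/api/ {
    proxy_pass http://ghost-backend;  # API keeps being served from the main site
}
location /ghost/ {
    return 301 https://youradminsite.com$request_uri;  # admin panel moves to the admin domain
}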
Hopefully this helps someone — I will update as I work through the process. As of now I solved it by dumping my GCP CloudSQL database down to a SQLite db file during build time (thereby allowing me to keep my admin instance clean and separate from the API endpoint — which for me remains the same URL).

AEM Error with ExternalLoginModule

I created an AEM6 author instance on localhost:4504.
When I load any page on the server, I have a lot of the following errors:
org.apache.jackrabbit.oak.spi.security.authentication.external.impl.ExternalLoginModule No IDP found with name cortexCSR. Will not be used for login.
org.apache.jackrabbit.oak.spi.security.authentication.external.impl.ExternalLoginModule No IDP found with name cortex. Will not be used for login.
org.apache.jackrabbit.oak.spi.security.authentication.external.impl.ExternalLoginModule No IDP found with name ldap. Will not be used for login.
Does anyone know how to fix this problem?
It sounds like you may have an instance that is configured for LDAP authentication. Check these URLs to see if that is the case.
Go to http://localhost:4504/system/console/configMgr and search for "ExternalLoginModule" or "org.apache.jackrabbit.oak" and then edit the config to see what is set for any items you find. It sounds like you have an ExternalLoginModuleFactory configured to look for an LDAPIdentityProvider that hasn't been configured. Most likely you need to add the configuration for the providers. See https://docs.adobe.com/docs/en/aem/6-0/administer/security/ldap-config.html for info on how to configure those. It could be that there is an OSGI config file that is runmode specific, so if your localhost isn't running with the same runmode it would not have applied the configuration in that case.
Also see http://abani-behera.blogspot.com/2014/07/ldap-integration-with-aem6-osgi-config.html for more details.
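As a hedged illustration (the PIDs match the log messages above, the property names follow the Oak LDAP docs linked earlier, and all values are placeholders), the pair of configurations usually looks something like this; the key point is that provider.name must match the idp.name each factory is looking for (here: ldap, cortex, and cortexCSR):
# org.apache.jackrabbit.oak.security.authentication.ldap.impl.LdapIdentityProvider.config
provider.name="ldap"
host.name="ldap.example.com"
host.port=I"389"
bind.dn="cn=admin,dc=example,dc=com"
bind.password="secret"
user.baseDN="ou=people,dc=example,dc=com"
# org.apache.jackrabbit.oak.spi.security.authentication.external.impl.ExternalLoginModuleFactory.config
idp.name="ldap"
sync.handlerName="default"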

Mulesoft - Uh-oh spaghettios! There's nothing here

This error is driving me nuts...
Situation:
I am trying to create a REST API and use an API gateway proxy to access it. The proxy URL is HTTPS.
The deployment goes through fine. No errors reported in the logs. A worker is assigned.
However, when I try to access it through the browser, I get "Uh-oh spaghettios! There's nothing here.".
I have tried all the usual things, like making the HTTPS port dynamic using ${https.port} and using 0.0.0.0 instead of localhost in the http-listener config, but that does not help. Does this have something to do with the proxy version?
Any help or pointers will be great!
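To illustrate, the kind of listener config being described looks roughly like this (all names and keystore values are placeholders):
<http:listener-config name="https-listener-config" protocol="HTTPS"
                      host="0.0.0.0" port="${https.port}">
    <tls:context>
        <tls:key-store path="keystore.jks" password="changeit" keyPassword="changeit"/>
    </tls:context>
</http:listener-config>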
Make sure you follow Step 2 from the link below:
Getting Started with Connectors
All,
Got the resolution: the problem was with the certificate chain. The keystore did not contain the intermediate certificates. Once they were added to the keystore, the connectivity worked fine.
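If anyone else hits this: adding an intermediate certificate to a JKS keystore typically looks like the following (the alias and file names are illustrative):
$ keytool -importcert -trustcacerts -keystore keystore.jks -alias intermediate-ca -file intermediate-ca.crt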
If only Mulesoft provided correct errors or detailed logging, I would have saved a lot of time on this.
Thanks for your inputs.

Proxy URL 'incache....com:8080' does not contain a valid hostname

Recently I was forced to switch from SVN to TFS.
I'm trying to get this working with TEE on our RedHat box.
Any action seems to end with something like this:
user@rh: tf -map $/XX/XX . -workspace:app-job -server:http://tfs.domain.com:8080/tfs/TFS2008/ -profile:TFS1_PRF_C
Password:
An error occurred: Proxy URL 'incache.domain.com:8080' does not contain a valid hostname.
Could someone help with that?
Your question is a little vague about what you expect to happen here (are you supposed to be using an HTTP proxy to access your TFS server, or is the problem that it is trying to use your HTTP proxy at all?).
I'm going to assume that you do not need to use an HTTP proxy to access your internal TFS server, since in most corporate environments your proxy is used to get outside the network, not inside. By default, the Team Explorer Everywhere CLC does try to use your system HTTP proxy, however this is configurable in your connection profile.
In order to override your default system HTTP proxy for that profile, you can set the profile property httpProxyIgnoreGlobal to true:
tf profile -edit -boolean:httpProxyIgnoreGlobal=true TFS1_PRF_C
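After editing the profile, re-running the original command should connect directly instead of trying to resolve the proxy host (assuming the TFS server really is reachable without a proxy):
$ tf -map $/XX/XX . -workspace:app-job -server:http://tfs.domain.com:8080/tfs/TFS2008/ -profile:TFS1_PRF_C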