Keycloak Admin Console Just Spins

Not sure if anyone can help me here, but you all have never failed in the past. I was tasked with figuring out how to configure Keycloak for an application my company uses. It took me forever, but I finally have Keycloak installed and can reach it at https://localhost:8443. It let me set up an admin user, but when I click through to the administration console, it just spins. I'm running Keycloak 19.0.1 on Windows Server 2019. In the Chrome developer console I can see:
{error: 'Timeout when waiting for 3rd party check iframe message.'}
It doesn't help that virtually all of the troubleshooting steps I'm finding are for Linux and don't seem to apply to what I'm looking at. Does anybody have any suggestions?

Related

Can anyone help me with this error code in Data Fusion

I'm having a go at creating my first Data Fusion pipeline.
The data goes from a Google Cloud Storage CSV file to BigQuery.
I created the pipeline and carried out a preview run, which was successful, but running it after deployment resulted in an error.
I accepted pretty much all of the default settings, apart from obviously configuring my source and destination.
Error from Log ...
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "Required 'compute.firewalls.list' permission for 'projects/xxxxxxxxxxx'",
    "reason" : "forbidden"
  } ],
  "message" : "Required 'compute.firewalls.list' permission for 'projects/xxxxxxxxxx'"
}
After deployment run fails
Do note that, as part of creating an instance, you must set up permissions [0]. The "Cloud Data Fusion API Service Agent" role must be granted to the exact service account specified in that document, whose email address begins with "cloud-datafusion-management-sa@...".
Doing so should resolve your issue.
[0] : https://cloud.google.com/data-fusion/docs/how-to/create-instance#setting_up_permissions
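If you prefer the command line, the grant can be sketched with gcloud. PROJECT_ID and SERVICE_ACCOUNT_EMAIL are placeholders you must fill in yourself (the email comes from the linked permissions document); the role ID for "Cloud Data Fusion API Service Agent" is roles/datafusion.serviceAgent:

```shell
# Placeholders: substitute your own project ID and the
# cloud-datafusion-management-sa@... address from the permissions doc.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SERVICE_ACCOUNT_EMAIL" \
  --role="roles/datafusion.serviceAgent"
```

After the binding is in place, re-run the deployed pipeline; the 403 on compute.firewalls.list should no longer occur.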

XS Project Share SAP HANA cannot see in browser

I have an XS project that I have already shared to a HANA package, but it fails to show in the browser. The error says:
404 - Not found
We could not find the resource you're trying to access.
It might be misspelled or currently unavailable.
My .xsaccess:
{
  "exposed" : true,
  "authentication" : [ { "method" : "Basic" } ],
  "cache_control" : "no-cache, no-store",
  "cors" : {
    "enabled" : false
  }
}
.xsapp:
{}
xsprivileges:
{
  "privileges" : [
    { "name" : "ProfileOwner", "description" : "Profile Ownership" }
  ]
}
One more question: could the problem be caused by the user's roles or privileges, i.e. authorization? How do I fix this issue? Thanks
The .xsapp should be an empty file with no content in it. The exposed parameter in the .xsaccess should be enough to expose your project. Make sure that all files are activated in the HANA repository.
If the error were authorization-specific you would get a 503 error. If the 404 error is an XSEngine page, either your code isn't activated or the package path is incorrect.

Spark REST API difficulties in understanding, goal sending RESTful messages from webpage

For a project I would like to drive Spark from a webpage. The goal is to dynamically submit jobs and request status updates. As inspiration I used the hidden REST API described at http://arturmkrtchyan.com/apache-spark-hidden-rest-api. After submitting the Spark job below, I send a REST request to check on the submission.
The request for a Spark job submission is the following:
curl -X POST http://sparkmasterIP:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "/home/opc/TestApp.jar" ],
  "appResource" : "file:/home/opc/TestApp.jar",
  "clientSparkVersion" : "1.6.0",
  "environmentVariables" : {
    "SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "com.Test",
  "sparkProperties" : {
    "spark.driver.supervise" : "false",
    "spark.app.name" : "TestJob",
    "spark.eventLog.enabled" : "true",
    "spark.submit.deployMode" : "cluster",
    "spark.master" : "spark://sparkmasterIP:6066"
  }
}'
Response:
{
  "action" : "CreateSubmissionResponse",
  "message" : "Driver successfully submitted as driver-20170302152313-0044",
  "serverSparkVersion" : "1.6.0",
  "submissionId" : "driver-20170302152313-0044",
  "success" : true
}
When asking for the submission status there were some difficulties. To request the status I used the submissionId shown in the response above, with the following command:
curl http://masterIP:6066/v1/submissions/status/driver-20170302152313-0044
The Response for Submission Status contained the following error:
"message" : "Exception from the cluster:\njava.io.FileNotFoundException: /home/opc/TestApp.jar denied)\n\tjava.io.FileInputStream.open0(Native Method)\n\tjava.io.FileInputStream.open(FileInputStream.java:195)\n\tjava.io.FileInputStream.<init>(FileInputStream.java:138)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:124)\n\torg.spark-project.guava.io.Files$FileByteSource.openStream(Files.java:114)\n\torg.spark-project.guava.io.ByteSource.copyTo(ByteSource.java:202)\n\torg.spark-project.guava.io.Files.copy(Files.java:436)\n\torg.apache.spark.util.Utils$.org$apache$spark$util$Utils$$copyRecursive(Utils.scala:540)\n\torg.apache.spark.util.Utils$.copyFile(Utils.scala:511)\n\torg.apache.spark.util.Utils$.doFetchFile(Utils.scala:596)\n\torg.apache.spark.util.Utils$.fetchFile(Utils.scala:395)\n\torg.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:150)\n\torg.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:79)",
My question is how to use this API so that the submission status can be obtained. If there is another API through which the correct status can be obtained, I would appreciate a short description of how it works in a RESTful way.
Thanks
As noted in the comments of the blog http://arturmkrtchyan.com/apache-spark-hidden-rest-api , other commenters are experiencing this problem as well. Below I will try to explain some of the possible reasons.
It looks like your file:/home/opc/TestApp.jar is not found or access is denied. This is likely because the jar is not present on all nodes while the submission runs in cluster mode.
As noted in the Spark documentation for application-jar: "Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes."
To check the status, one recommendation is to use spark-submit. More information about spark-submit can be found in the Spark documentation and in a book by Jacek Laskowski:
spark-submit --status [submission ID] --master [spark://...]
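Following the documentation quote above, a resubmission could be sketched by first staging the jar somewhere globally visible, for example HDFS. The HDFS path and the namenode address below are illustrative assumptions, not values from the original setup:

```shell
# Stage the jar where every node can read it (hypothetical HDFS path):
hdfs dfs -put /home/opc/TestApp.jar /apps/TestApp.jar

# Resubmit, pointing appResource at the globally visible URL:
curl -X POST http://sparkmasterIP:6066/v1/submissions/create \
  --header "Content-Type:application/json;charset=UTF-8" \
  --data '{
    "action" : "CreateSubmissionRequest",
    "appArgs" : [ "hdfs://namenode:8020/apps/TestApp.jar" ],
    "appResource" : "hdfs://namenode:8020/apps/TestApp.jar",
    "clientSparkVersion" : "1.6.0",
    "mainClass" : "com.Test",
    "sparkProperties" : {
      "spark.submit.deployMode" : "cluster",
      "spark.master" : "spark://sparkmasterIP:6066"
    }
  }'
```

With the jar reachable from every worker, the FileNotFoundException during downloadUserJar should no longer appear in the status response.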

how to DIAGNOSE a db2 "SQL0964C" The transaction log for the database is full

I know how to RESOLVE the problem, but I have no idea how to find the cause/source (e.g. which statement) of the problem, or where (tables, tools, commands) to look.
Can I see something in this excerpt from db2diag.log?
2015-06-24-09.23.29.190320+120 ExxxxxxxxxE530 LEVEL: Error
PID : 15972 TID : 1 PROC : db2agent (XXX) 0
INSTANCE: db2inst2 NODE : 000 DB : XXX
APPHDL : 0-4078 APPID: xxxxxxxx.xxxx.xxxxxxxxxxxx
AUTHID : XXX
FUNCTION: DB2 UDB, data protection services, sqlpgResSpace, probe:2860
MESSAGE : ADM1823E The active log is full and is held by application handle
"3308". Terminate this application by COMMIT, ROLLBACK or FORCE
APPLICATION.
The db2diag.log shows you the agent ID (application handle) of the application causing the problem (3308).
Provided you are seeing this in real time (as opposed to looking at db2diag.log after the fact), you can:
Use db2top to view information about this connection
Query sysibmadm.snapstmt (looking at stmt_text and agent_id)
Use db2pd -activestatements and db2pd -dynamic (keying on AnchID and StmtUID)
Use good old get snapshot for application
There are also many 3rd-party tools that can give you the information you are looking for.
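For example, assuming the application handle 3308 from the db2diag.log excerpt above, the offending statement could be pulled like this (a sketch using the SYSIBMADM.SNAPSTMT administrative view and the DB2 command line processor):

```shell
# Look up the statement text for the agent holding the active log (handle 3308):
db2 "SELECT AGENT_ID, SUBSTR(STMT_TEXT, 1, 200) AS STMT_TEXT
     FROM SYSIBMADM.SNAPSTMT
     WHERE AGENT_ID = 3308"

# Or take a full snapshot of that application:
db2 "GET SNAPSHOT FOR APPLICATION AGENTID 3308"
```

Both commands require an active database connection and, for the snapshot view, the statement monitor switch to be on.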

When using PowerShell 'Invoke-WebRequest' with SSL, how can the 'session' be deleted on the client side after disconnect?

Good morning,
I am using PowerShell to interact with the VMWare vCloud API and am having problems after disconnecting from vCloud.
The process for connecting and using the API is as follows:
Connect to vCloud using a POST
Perform necessary operations
Disconnect using DELETE (This clears down the session at the remote endpoint)
All communication is over HTTPS.
The problem is that when this process runs for the first time in a new PowerShell console, everything is OK. However, if I repeat the process I invariably get the following message:
Invoke-WebRequest : The underlying connection was closed: An unexpected error occurred on a send.
At C:\tfs\poshvcloud\solutions\poshvcloud\functions\helpers\_Invoke- vCloudRequest.ps1:95 char:17
+ $response = Invoke-WebRequest @splat
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
I have used Fiddler in an attempt to find out what is going on, and interestingly I only ever see 'CONNECT' operations and no attempt to even communicate with vCloud when the above error is displayed.
Indeed, I have checked the ServicePoint using [System.Net.ServicePointManager]::FindServicePoint("https://vcloud.example.com/"). I get an object back, and it states I do not have any connections, but for some reason the connection is not re-established when I try to connect again. It is as if PowerShell does not bother with the SSL handshake again.
BindIPEndPointDelegate :
ConnectionLeaseTimeout : -1
Address : https://vcloud.example.com/
MaxIdleTime : 100000
UseNagleAlgorithm : True
ReceiveBufferSize : -1
Expect100Continue : False
IdleSince : 18/07/2014 09:11:34
ProtocolVersion : 1.1
ConnectionName : https
ConnectionLimit : 2
CurrentConnections : 0
Certificate : System.Security.Cryptography.X509Certificates.X509Certificate
ClientCertificate :
SupportsPipelining : True
One weird thing is that if I attempt another operation quickly enough it does go through, but eventually it times out and I get the error.
The only way I can get this to work again is to close my current PowerShell console, start a new one, and reload all the necessary modules.
I have tried 'DisableKeepAlive' on the Invoke-WebRequest, but this causes the API calls to fail completely. I then tried it on just the disconnect, but this did not work either.
I am thinking that I might need to write my own version of Invoke-WebRequest, but I suspect I will still have to clear things down and I'm not sure how. I would rather stick with Invoke-WebRequest if I can.
I know that there is PowerCLI from VMWare that takes care of all of this; however, it is a huge module and only a fraction of it applies to vCloud. Indeed, I started off using it, but due to some inconsistencies and its large dependency footprint I created my own vCloud module that talks to the vCloud REST API directly.
I hope this is enough information, but please let me know if more is required. Any help is greatly appreciated.
Kind regards, Russell
Try closing the underlying response so the connection is released and the next request can handshake afresh:
# Close the HttpWebResponse once you are done with the result
$result = Invoke-WebRequest $address -Method $Method
$result.BaseResponse.Close()