LaunchConfiguration is just stalling (the IAM policy has Admin privileges) with 2 configSets in CloudFormation - aws-cloudformation

Let me know how I can delete the stack after the TTL expires. It seems the two configSets are not working:
"All" : [ "ConfigureSampleApp", "ConfigureTTL" ]


Deploying custom Keycloak theme with Operator (v15.1.1 & v16.0.0)

I have a theme with a size >1MB (which precludes the ConfigMap solution provided as an answer to this question).
This theme has been packaged according to the Server Development Guide - its folder structure is
META-INF/keycloak-themes.json
themes/[themeName]/login/login.ftl
themes/[themeName]/login/login-reset-password.ftl
themes/[themeName]/login/template.ftl
themes/[themeName]/login/template.html
themes/[themeName]/login/theme.properties
themes/[themeName]/login/messages/messages_de.properties
themes/[themeName]/login/messages/messages_en.properties
themes/[themeName]/login/resources/[...]
The contents of keycloak-themes.json are
{
  "themes": [{
    "name": "[themeName]",
    "types": [ "login" ]
  }]
}
where [themeName] is my theme name.
Keycloak is running with 3 instances; its resource spec includes:
extensions:
  - [URL-to-jar]
Deployment was successful according to the logs of each pod - each pod's log contains a message including
Deployed "[jar-name].jar" (runtime-name : "[jar-name].jar")
However, in the admin console, I cannot select the theme from the extension for the login theme. Creating a new realm via CRD with a preconfigured login theme via the spec entry
loginTheme: [themeName]
also does not work - in the admin console, the selected entry for the login theme is empty.
I may be missing something basic, but according to this answer it seems like this ought to work, if I am not mistaken.
As is so often the case, an uncaught typo was the source of the error.
The directory structure must not be
META-INF/keycloak-themes.json
themes/[theme-name]/[theme-role]/theme.properties
[...]
but instead
META-INF/keycloak-themes.json
theme/[theme-name]/[theme-role]/theme.properties
[...]
Given a correct structure, keycloak-operator can successfully deploy and load custom-themes as jar-extensions.
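For reference, packaging the corrected layout into a jar can be as simple as the following, run from the directory containing META-INF/ and theme/ (the jar name is a placeholder):

jar cvf my-theme.jar META-INF/keycloak-themes.json theme/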

Kafka connector's task statuses differ when queried against different Kafka Connect nodes in a clustered environment

We have a 3-node Kafka Connect cluster running version 5.5.4 in distributed mode. We are observing a strange issue with a connector's task status.
REST calls to node 1 and node 2 return different results.
The first node returned this result:
{
  "connector": {
    "state": "RUNNING",
    "worker_id": "x.com:8083"
  },
  "name": "connector",
  "type": "source",
  "tasks": []
}
Yes, the tasks list is empty, whereas the other node returned this result:
{
  "connector": {
    "state": "RUNNING",
    "worker_id": "x.com:8083"
  },
  "name": "connector...",
  "type": "source",
  "tasks": [
    {
      "id": 0,
      "state": "RUNNING",
      "worker_id": "x.com:8083"
    }
  ]
}
As mentioned in this doc (https://docs.confluent.io/home/connect/userguide.html#kconnect-internal-topics), I have configured group.id, config.storage.topic, offset.storage.topic, and status.storage.topic with identical values on all 3 nodes.
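For illustration, the relevant portion of each worker's properties file looks like this (the topic names below are placeholders, not our actual values):

group.id=connect-cluster
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-statuses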
I did go through the connect-statuses-0 data directory, and the file sizes for the log, index, and timestamp files are all identical on node 1 and node 2. I don't know what the .snapshot file is, but I see only one (owned by the root user/group) on the first node, whereas I see two of them on the second node: one owned by the root user/group and the other owned by our custom-created user. I'm not sure whether this has anything to do with the problem.
Please guide me in identifying the root cause of this problem. If I need to check any configuration, please let me know.
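For anyone trying to reproduce the comparison, the results above come from the standard Connect status endpoint queried on each node (the hostnames and connector name are placeholders):

curl -s http://node1.x.com:8083/connectors/connector/status
curl -s http://node2.x.com:8083/connectors/connector/status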

Can anyone help me with this error code in Data Fusion

I'm having a go at creating my first Data Fusion pipeline.
The data goes from a CSV file in Google Cloud Storage to BigQuery.
I created the pipeline and carried out a preview run, which was successful, but trying to run it after deployment resulted in an error.
I pretty much accepted all the default settings, apart from obviously configuring my source and destination.
Error from the log:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "Required 'compute.firewalls.list' permission for 'projects/xxxxxxxxxxx'",
    "reason" : "forbidden"
  } ],
  "message" : "Required 'compute.firewalls.list' permission for 'projects/xxxxxxxxxx'"
}
After deployment, the run fails.
Do note that, as part of creating an instance, you must set up permissions [0]. The role "Cloud Data Fusion API Service Agent" must be granted to the exact service account specified in that document, which has an email address beginning with "cloud-datafusion-management-sa@...".
Doing so should resolve your issue.
[0] : https://cloud.google.com/data-fusion/docs/how-to/create-instance#setting_up_permissions
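As a hedged sketch, granting that role with gcloud might look like the following - the project ID is a placeholder, the role ID is assumed to be roles/datafusion.serviceAgent, and the exact service-account address should be taken from the linked document:

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:cloud-datafusion-management-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/datafusion.serviceAgent"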

Can I update Windows ClientIdentities after cluster creation?

I currently have something like this in my clusterConfig.json file:
"ClientIdentities": [
{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
}
]
My questions are:
My cluster is stood up and running. Can I add a second security group to this cluster while it is running? I've searched through the PowerShell commands and didn't see one that matched, but I may have missed it.
If I can't do this while the cluster is running, do I need to delete the cluster and recreate it? If I do need to recreate it, I'm zeroing in on the word ClientIdentities. I'm assuming I can have multiple identities and my config should look something like:
ClientIdentities": [{
"Identity": "{My Domain}\\{My Security Group}",
"IsAdmin": true
},
{
"Identity": "{My Domain}\\{My Second Security Group}",
"IsAdmin": false
}
]
Thanks,
Greg
Yes, it is possible to update ClientIdentities once the cluster is up, using a configuration upgrade:
1. Create a new JSON file with the added client identities.
2. Modify the clusterConfigurationVersion in the JSON config.
3. Run Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "Path to new JSON"
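A minimal PowerShell sketch of that flow (the endpoint and file path are placeholders):

# Connect to the cluster's management endpoint
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"

# Start the configuration upgrade using the updated JSON (with its clusterConfigurationVersion bumped)
Start-ServiceFabricClusterConfigurationUpgrade -ClusterConfigPath "C:\temp\clusterConfig.Updated.json"

# Poll the upgrade status until it completes
Get-ServiceFabricClusterConfigurationUpgradeStatus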

Permission error when trying to use chrome.fileSystem chooseEntry

When trying to use
chrome.fileSystem.chooseEntry({
  type: 'openFile'
}, chooseEntryCallback)
on Canary 28.0.1483.0 , I get the following error in the console:
chrome.fileSystem is not available: You do not have permission to access this API. Ensure that the required permission or manifest property is included in your manifest.json.
I only require read access, and this is what the permissions option in my manifest file looks like:
"permissions": [
{
"fileSystem": []
},
"contextMenus",
"clipboardWrite",
"storage"
],
This works fine with Stable 26.0.1410.64, so the question is whether there are some manifest permission changes that need to be made.
Note: Chrome is running on Windows 8, and when opening the file via drag and drop it opens without errors. So I'm guessing it's some problem with chooseEntry?
Based on #sowbug's comment, I fixed this issue by changing the fileSystem permission to a plain string list item:
"permissions": [
"fileSystem"
],
Edit:
To include the extended write permission:
"permissions": [
"fileSystem",
"fileSystem.write"
],