Data Bag Items must contain a Hash or Mash error - chef-solo

I am getting the following error when provisioning a chef node:
[2015-02-04T06:46:11-08:00] ERROR: Failed to load data bag item: "site1" "config"
==> default: Chef::Exceptions::ValidationFailed
==> default: ----------------------------------
==> default: Data Bag Items must contain a Hash or Mash!
I have verified that the data bag item config.json exists and that the id inside it matches the file name. The data bag path is also set correctly in the Vagrantfile.
config.json
{
  "id": "config",
  "username": "user",
  "password": "pwd"
}
The JSON is valid.
UPDATE: The issue occurs on Chef client 12.0.1 and 12.0.3. It does not occur when downgrading to 11.18.0.

This is a known but unconfirmed bug; there is a GitHub issue open for the problem.
For what it's worth, the problem only seems to show up with Vagrant. As you discovered, the workaround is to use an 11.x release (see the sketch below for pinning the version).
Edit: I guess it doesn't only happen on Vagrant!
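For anyone needing the downgrade, a minimal sketch of pinning the Chef build in the Vagrantfile (this assumes the vagrant-omnibus plugin is installed; the recipe name is hypothetical):
Vagrant.configure("2") do |config|
  # Pin the Chef build installed on the guest (requires vagrant-omnibus)
  config.omnibus.chef_version = "11.18.0"

  config.vm.provision "chef_solo" do |chef|
    chef.data_bags_path = "data_bags"   # folder containing data_bags/site1/config.json
    chef.add_recipe "site1"             # hypothetical recipe name
  end
end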

Print editions using metaboss on Solana

I'm trying to create prints from a master edition (aka original edition) using metaboss from the console. The number of prints should be limited to a fixed number.
I followed this procedure:
1. Upload the image to Arweave: arloader upload image.jpg --with-sol --sol-keypair-path ~/.config/solana/id.json --ar-default-keypair --no-bundle
2. Create the JSON file with the NFT metadata:
{
  "name": "name_of__the_collection",
  "symbol": "token_of_the_collection",
  "uri": "https://arweave.net/[arweave_img_tx_id]",
  "seller_fee_basis_points": 0,
  "creators": [
    {
      "address": "address_of_the_creator_of_the_collection",
      "verified": false,
      "share": 100
    }
  ]
}
3. Mint the NFT:
metaboss mint one --keypair ~/.config/solana/id.json --nft-data-file ./metadata.json --max-editions='10'
4. Create all the prints:
metaboss mint missing-editions --account address_of_the_creator_of_the_collection
I have two issues:
On Solana Explorer, I get an error: error loading image
Command 4 returns an error: Error: failed to get account data
What's wrong?
[edit] Error 1: I used the uri key instead of image in the metadata. That's why Solana Explorer couldn't find the image.
Generally the process is good. There are some details that have to be aligned though:
Regarding the missing image:
You have to upload the metadata JSON file, too. This is what you reference in the mint command.
Your metadata is not 100% valid. E.g. you are missing the properties field. Have a look into the Token Metadata docs for more details.
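For illustration, a minimal sketch of the off-chain metadata file, i.e. the file your uri field should point to (the exact schema is in the Token Metadata docs; the Arweave ID is a placeholder):
{
  "name": "name_of__the_collection",
  "symbol": "token_of_the_collection",
  "image": "https://arweave.net/[arweave_img_tx_id]",
  "properties": {
    "files": [
      {
        "uri": "https://arweave.net/[arweave_img_tx_id]",
        "type": "image/jpeg"
      }
    ],
    "category": "image"
  }
}
You would upload this file the same way as the image (step 1) and put the resulting Arweave URL into the uri field of the data you pass to metaboss.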
Regarding metaboss mint missing-editions:
The account you specify with --account should not be the address of the creator of the collection but instead the Master Edition address. (The Master Edition is the NFT you minted in step 3.)
Since the command runs a GPA call, you should add --timeout 120 and not use the default RPC. Otherwise you will not get results.
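For illustration, the full call might look like this (a sketch; the RPC endpoint is a placeholder for your own provider, and the global flags go before the subcommand):
metaboss --rpc https://your-rpc-provider.example/ --timeout 120 mint missing-editions --account <master_edition_mint_address>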
If it still does not work you can also run
metaboss mint editions --next-editions 9
Please let me know in case of any uncertainties.

Deploying custom Keycloak theme with Operator (v15.1.1 & v16.0.0)

I have a theme with a size >1MB (which precludes the configmap-solution provided as an answer to this question).
This theme has been packaged according to the Server Development Guide; its folder structure is
META-INF/keycloak-themes.json
themes/[themeName]/login/login.ftl
themes/[themeName]/login/login-reset-password.ftl
themes/[themeName]/login/template.ftl
themes/[themeName]/login/template.html
themes/[themeName]/login/theme.properties
themes/[themeName]/login/messages/messages_de.properties
themes/[themeName]/login/messages/messages_en.properties
themes/[themeName]/login/resources/[...]
The contents of keycloak-themes.json are
{
  "themes": [{
    "name": "[themeName]",
    "types": [ "login" ]
  }]
}
where [themeName] is my theme name.
Keycloak is running with 3 instances; its resource spec includes:
extensions:
- [URL-to-jar]
Deployment was successful according to the logs of each pod - each log contains a message containing
Deployed "[jar-name].jar" (runtime-name : "[jar-name].jar")
However, in the admin console, I cannot select the theme from the extension as the login theme. Creating a new realm via CRD with a preconfigured login theme via the spec entry
loginTheme: [themeName]
also does not work; in the admin console, the selected entry for the login theme is empty.
I may be missing something basic, but it seems like this ought to work according to this answer if I am not mistaken.
As is so often the case, an uncaught typo was the source of the error.
The directory structure must not be
META-INF/keycloak-themes.json
themes/[theme-name]/[theme-role]/theme.properties
[...]
but instead
META-INF/keycloak-themes.json
theme/[theme-name]/[theme-role]/theme.properties
[...]
Given a correct structure, the keycloak-operator can successfully deploy and load custom themes as jar extensions.
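For reference, a minimal sketch of packaging and checking the jar from the corrected layout (the jar name is a placeholder):
jar cf [themeName]-theme.jar META-INF theme
jar tf [themeName]-theme.jar    # verify: theme entries must start with theme/, not themes/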

Product images issue in Vue Storefront

I integrated Vue Storefront with Magento 2. The frontend works fine, but product images are not displayed. It throws the error Unable to compile TypeScript:\nsrc/image/action/local/index.ts(27,18): error TS2339: Property 'query' does not exist on type 'Request<any, any, any, any>'. imagemagick is installed and imgUrl in local.json is defined.
Does anyone know why this error is displayed?
It is about this.req, which is of type Request from Express; it does have a query property. Please make sure you have the yarn.lock from the original repo and reinstall the dependencies.
If you are using Docker, you might need to add:
- './yarn.lock:/var/www/yarn.lock'
to the volumes section in docker-compose.nodejs.yml.
I have found a simple solution you can try:
Copy all your Magento 2 pub/media data into vue-storefront-api/var/magento-folder/pub/media
or
create a symlink if you are working on localhost (see the sketch below).
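For example, a minimal sketch of the symlink variant (the Magento install path is an assumption):
ln -s /var/www/magento2/pub/media vue-storefront-api/var/magento-folder/pub/media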
vue-storefront-api/config/local.json
"magento2": {
"imgUrl": "http://magento-domain/pub/media/catalog/product",
"assetPath": "/../var/magento-folder/pub/media",
}
vue-storefront/config/local.json
"images": {
"useExactUrlsNoProxy": false,
"baseUrl": "http://localhost:8080/img/",
"useSpecificImagePaths": false,
"paths": {
"product": "/catalog/product"
},
"productPlaceholder": "/assets/placeholder.jpg"
},
Then restart both vue-storefront and vue-storefront-api so the new config is picked up.
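For instance, assuming the standard local dev setup (the exact scripts may differ per release):
cd vue-storefront-api && yarn dev
cd vue-storefront && yarn dev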

Untrackable Error: Unrecognized field "notBefore" (class org.keycloak.representations.idm.UserRepresentation)

I hope you can help me out here. I need to relaunch a project after its development stopped about half a year ago. It consists of different microservices, mainly a Scala Swagger API based on the Play framework, which uses Keycloak to secure the application and its users.
While fetching the user profile the following internal server error occurs:
"javax.ws.rs.ProcessingException: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException:
Unrecognized field "notBefore" (class org.keycloak.representations.idm.UserRepresentation), not marked as ignorable
(24 known properties: "disableableCredentialTypes", "enabled", "emailVerified", "origin", "self", "applicationRoles", "createdTimestamp", "clientRoles", "groups", "username", "totp", "id", "email", "federationLink", "serviceAccountClientId", "lastName", "clientConsents", "socialLinks", "realmRoles", "attributes", "firstName", "credentials", "requiredActions", "federatedIdentities"])
at [Source: org.apache.http.conn.EofSensorInputStream#3101000d; line: 1, column: 369] (through reference chain: org.keycloak.representations.idm.UserRepresentation["notBefore"])"
At other endpoints I get a similar error, except a different field is missing from the definition.
The Keycloak server runs version 3.4.0.Final, and the dependencies have all been upgraded to version 3.4.0.Final:
keycloak-services
keycloak-admin-client
keycloak-adapter-core
keycloak-authz-client
Can anyone help me out?
Thanks in advance
Solution
My Docker images did not pull the current version, so the dependencies stayed as they were in my running environment.
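For anyone hitting the same thing, a hedged sketch of forcing fresh images with docker-compose (applies to all services; adjust to your setup):
docker-compose pull                       # fetch the current versions of the tagged images
docker-compose build --pull --no-cache    # rebuild local images against fresh base images
docker-compose up -d --force-recreate     # recreate the containers from the new images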

Logstash-Forwarder 3.1 state file .logstash-forwarder not updating

I am having an issue with Logstash-forwarder 3.1.1 on CentOS 6.5 where the state file /.logstash-forwarder is not updating as information is sent to Logstash.
I have found that as activity is logged by logstash-forwarder, the corresponding offset is not recorded in the /.logstash-forwarder state file. The /.logstash-forwarder file is recreated each time 100 events are recorded, but it is not updated with new data. I know the file has been recreated because I changed its permissions to test, and the permissions are reset each time.
Below are my configurations (with some actual data scrubbed):
Logstash-forwarder 3.1.1
CentOS 6.5
/etc/logstash-forwarder
Note that the "paths" key does contain wildcards
{
  "network": {
    "servers": [ "*server*:*port*" ],
    "timeout": 15,
    "ssl ca": "/*path*/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/a/b/tomcat-*-*/logs/catalina.out"
      ],
      "fields": { "type": "apache", "time_zone": "EST" }
    }
  ]
}
Per the Logstash instructions for CentOS 6.5, I have configured the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Below is the resting state of the /.logstash-forwarder state file:
{"/a/b/tomcat-set-1/logs/catalina.out":{"source":"/a/b/tomcat-set-1/logs/catalina.out","offset":433564,"inode":*number1*,"device":*number2*},"/a/b/tomcat-set-2/logs/catalina.out":{"source":"/a/b/tomcat-set-2/logs/catalina.out","offset":18782151,"inode":*number3*,"device":*number4*}}
There are two sets of logs being captured. The offset has stayed the same for 20 minutes while activity has occurred and been sent over to Logstash.
Can anyone give me any advice on how to fix this problem whether it be a configuration setting I missed or a bug?
Thank you!
After more research I found it was announced that Filebeat is now the preferred forwarder. I even found a post by the maintainer of Logstash-Forwarder saying the program is full of bugs and no longer fully supported.
I have instead moved to CentOS 7 with the latest version of the ELK stack, using Filebeat as the forwarder. Things are going much smoother now!
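For anyone migrating the same setup, a hedged sketch of an equivalent Filebeat configuration (modern filebeat.inputs syntax; the paths, fields, and server placeholders are carried over from the config above):
filebeat.inputs:
  - type: log
    paths:
      - /a/b/tomcat-*-*/logs/catalina.out
    fields:
      type: apache
      time_zone: EST
    fields_under_root: true    # keep type at the top level, as logstash-forwarder did

output.logstash:
  hosts: ["*server*:*port*"]
  ssl.certificate_authorities: ["/*path*/logstash-forwarder.crt"]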