ERROR running force:package:version:create: Invalid character in header content ["getApiVersion"]

a. If the Dev Hub org doesn't have its "Unlocked and 2GP Packages" option enabled, you get the error message Invalid character in header content ["getApiVersion"], so check that setting first.
b. This can also be caused by a lack of permissions for the user performing the operation. If you are using a Limited Access User profile to perform the job, make sure the following permissions are part of your profile:
Create and Update Second-Generation Packages
Promote a package version to released
c. The error might also be caused by custom code; please reach out to your internal developer to investigate this further.

In my case it was option c. Adding this here to help the community.
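For context, a hedged sketch of the kind of command that hits this error; the package name and flags here are illustrative, not from the original report:

$ sfdx force:package:version:create --package "MyPackage" --installation-key-bypass --wait 10

If the cause is option a or b, fixing the Dev Hub setting or the profile permissions and re-running the same command should succeed.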

Related

OpenSearch 1.3 > 2.3 upgrade, CloudFormation fails on domain update

I recently updated our CDK code to move our OpenSearch cluster from version 1.3 to 2.3. The cluster itself seems to have upgraded to a healthy state and is still accessible / usable by our application, but CloudFormation failed when attempting to update our domain resource with:
Resource handler returned message: "Resource handler returned message: "Invalid request provided: DP Nodes are OOS, Tags operation is not allowed"
This kicked the stack into UPDATE_ROLLBACK_FAILED, since the rollback itself is not allowed: the cluster cannot be downgraded back to 1.3.
I'm struggling to find any information about the error it's kicking out, and I'm not quite sure how to resolve it to unblock the CloudFormation stack.
Things I have tried:
Digging through CloudWatch logs only revealed information pertaining to queries.
Forcing the rollback to skip the Domain resource (see the command sketch after this list). This got me back to an UPDATE_COMPLETE state, but each subsequent deploy of this stack will cause it to fail again since the core issue is not resolved.
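For reference, skipping a stuck resource during rollback is typically done with the CLI's continue-update-rollback; a minimal sketch, where the stack name and logical resource ID are hypothetical:

$ aws cloudformation continue-update-rollback \
    --stack-name my-app-stack \
    --resources-to-skip MyOpenSearchDomain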
This was an odd presentation of a permissions issue. As I was reading through some docs, I stumbled upon this section, which discusses changes to tag-based access control.
This led me to start looking into CloudTrail a bit, where I stumbled upon the exact error that was firing when this deploy happened. It was a little odd because the assumed role granted admin access to CloudFormation, but the last line of this event record caught my eye:
"sourceIPAddress": "cloudformation.amazonaws.com",
"userAgent": "cloudformation.amazonaws.com",
"errorCode": "ValidationException",
"errorMessage": "DP Nodes are OOS, Tags operation is not allowed",
"eventSource": "es.amazonaws.com",
Upon adding es.amazonaws.com to the trust relationship of that role, the deploy fully re-ran successfully.
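For anyone hitting the same thing, a minimal sketch of the updated trust policy; this assumes the role previously trusted only cloudformation.amazonaws.com, so adjust to match your role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": ["cloudformation.amazonaws.com", "es.amazonaws.com"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}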
Hopefully this helps someone else.

Kubeflow fails to deploy using both CLI and Console

I deleted my KF cluster last night to create a new one (using the kubectl cluster command, not kfctl delete), and then when I tried to create a new one, it failed; it does not work with the CLI nor the Console. I found other people have run into this issue before, for example (here and here).
"However, as I said even with CLI my deployment fails, the error from console is:
Failed to apply: (kubeflow.error): Code 500 with message: coordinator Apply failed for gcp: (kubeflow.error): Code 500 with message: gcp apply could not update deployment manager Error could not update storage-kubeflow.yaml; Insert deployment error: googleapi: Error 403: Request had insufficient authentication scopes.
More details:
Reason: insufficientPermissions, Message: Insufficient Permission"
and the error I get from Console is:
"Please enable APIs for your project and try again
Please enable cloud resource manager API: https://console.developers.google.com/apis/api/cloudresourcemanager.googleapis.com/ and iam API: https://console.developers.google.com/apis/api/iam.googleapis.com/"
Note that this error is wrong; all the APIs are already active. I'm quite sure this is a bug in KF, but I'm not sure how to find a workaround. Any thoughts?
With CLI, I'm using my own account which has "owner" privileges.
Thanks
It seems you have an issue with IAM and the installation of Kubeflow, a third-party product that is itself not supported by us; nevertheless, I went ahead and dug up some information about this machine learning product.
The main issues (although it seems you have already covered permissions) are permissions, the number of projects, and some fine-grained settings.
I was checking and found the following resources that may help:
a) Troubleshooting Kubeflow [1]
b) Deploying Kubeflow on GKE [2]
c) Kubeflow auto-deployer for GKE [3]
There is also some discussion about a mismatched permission setting in Kubeflow that may be worth reading [4].
Finally, there is a support group, run on a best-effort basis due to the nature of Kubeflow, "google-kubeflow-support@google.com", that may come in handy.
I trust this information will be useful for you in solving your issue.
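One hedged sanity check, since the Console claims the APIs are disabled while you say they are active: make sure the CLI is pointing at the project you think it is (the project ID below is a placeholder):

$ gcloud config get-value project
$ gcloud services list --enabled --project my-kf-project | grep -E 'cloudresourcemanager|iam'
$ gcloud services enable cloudresourcemanager.googleapis.com iam.googleapis.com --project my-kf-project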

Invalid permissions after setting gcloud caching use_kaniko?

I encountered a strange permissions error while building Docker images on the cloud. I switched to another machine, installed gcloud, ran gcloud init, and everything worked again.
However, I noticed while building images that it took much longer, because I hadn't enabled the Kaniko cache (which I figured out from this post: gcloud rebuilds complete container but Dockerfile is the same, only the script has changed).
After enabling this feature, I tried to rebuild my last image and bam, the same error message:
Status: Downloaded newer image for gcr.io/kaniko-project/executor:latest
gcr.io/kaniko-project/executor:latest
error checking push permissions --
make sure you entered the correct tag name, and that you are authenticated correctly, and try again:
checking push permission for "eu.gcr.io/pipeline/tree-par": creating push check transport for eu.gcr.io failed:
GET https://eu.gcr.io/v2/token?scope=repository%3pipeline%2Ftree-par%3Apush%2Cpull&service=eu.gcr.io:
UNAUTHORIZED: You don't have the needed permissions to perform this operation, and you may have invalid credentials.
To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
ERROR
ERROR: build step 0 "gcr.io/kaniko-project/executor:latest" failed: step exited with non-zero status: 1
-------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build bad4a9a4-054d-4ad7-991d-e5aeae039b7c completed with status "FAILURE"
Does anyone have any idea why this failed upon enabling the Kaniko cache? I'd hate not to use it, because when it still worked it really decreased the time it took to create Docker images.
It seems that the issue comes from Kaniko's end.
Three days ago, in version v0.21.0, they added this fix:
Fix: GCR credential helper check does not respect DOCKER_CONFIG environment variable
Even after this release, an issue was reported one day later where users saw a very similar error message:
"[...] You don't have the needed permissions to perform this operation, and you may have invalid credentials[...] "
This was already fixed yesterday with the release of version v0.22.0. The suggested workaround is to pin the executor image to that version:
gcr.io/kaniko-project/executor:v0.22.0
I would suggest using that image instead of executor:latest to "force" the use of the v0.22.0 version.
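If you drive the build from a cloudbuild.yaml instead of relying on gcloud's built-in Kaniko integration, a minimal sketch of pinning the version looks like this; the destination tag is taken from the error output above, and --cache is a standard Kaniko flag:

steps:
- name: 'gcr.io/kaniko-project/executor:v0.22.0'
  args:
  - --destination=eu.gcr.io/pipeline/tree-par
  - --cache=true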
I hope this is helpful! :)

Why does BitBake error if it can't find www.example.com?

BitBake fails for me because it can't find https://www.example.com.
My computer is an x86-64 running native Xubuntu 18.04. Network connection is via DSL. I'm using the latest versions of the OpenEmbedded/Yocto toolchain.
This is the response I get when I run BitBake:
$ bitbake -k core-image-sato
WARNING: Host distribution "ubuntu-18.04" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:
Fetcher failure for URL: 'https://www.example.com/'. URL https://www.example.com/ doesn't work.
Please ensure your host's network is configured correctly,
or set BB_NO_NETWORK = "1" to disable network access if
all required sources are on local disk.
Summary: There was 1 WARNING message shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
The networking issue, the reason why I can't access www.example.com, is a question for the SuperUser forum. My question here is: why does BitBake rely on the existence of www.example.com? What is it about that website that is so vital to BitBake's operation? Why does BitBake raise an error if it cannot find https://www.example.com?
At this time, I don't wish to set BB_NO_NETWORK = "1". I would rather understand and resolve the root cause of the problem first.
Modifying poky.conf didn't work for me (and from what I read, modifying anything under Poky is a no-no for a long-term solution).
Modifying conf/local.conf was the only solution that worked for me. Simply add one of the two options:
#check connectivity using google
CONNECTIVITY_CHECK_URIS = "https://www.google.com/"
#skip connectivity checks
CONNECTIVITY_CHECK_URIS = ""
This solution was originally found here.
For me, this appears to be a problem with my ISP (CenturyLink) not correctly resolving www.example.com. If I try to navigate to https://www.example.com in the browser address bar I just get taken to the ISP's "this is not a valid address" page.
Technically speaking, this isn't supposed to happen, but for whatever reason it does. I was able to work around this temporarily by modifying the CONNECTIVITY_CHECK_URIS in poky/meta-poky/conf/distro/poky.conf to something that actually resolves:
# The CONNECTIVITY_CHECK_URI's are used to test whether we can successfully
# fetch from the network (and warn you if not). To disable the test set
# the variable to be empty.
# Git example url: git://git.yoctoproject.org/yocto-firewall-test;protocol=git;rev=master
CONNECTIVITY_CHECK_URIS ?= "https://www.google.com/"
See this commit for more insight and discussion on the addition of the www.example.com check. Not sure what the best long-term fix is, but the change above allowed me to build successfully.
If you want to resolve this issue without modifying poky.conf or local.conf or any of the files for that matter, just do:
$ touch conf/sanity.conf
It is clearly written in meta/conf/sanity.conf that:
Expert users can confirm their sanity with "touch conf/sanity.conf"
If you don't want to execute this command on every session or build, you can comment out the line INHERIT += "sanity" from meta/conf/sanity.conf, so the file looks something like this:
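# ... rest of meta/conf/sanity.conf left as shipped (a sketch, not the full file) ...
# Expert users can confirm their sanity with "touch conf/sanity.conf"
#INHERIT += "sanity"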
I had the same issue with Bell as my ISP: accessing example.com gave a DNS error.
I solved it by switching from the ISP's DNS to Google's Public DNS (to avoid making changes to configs):
https://developers.google.com/speed/public-dns/docs/using
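A quick way to confirm the ISP's resolver is the culprit is to query Google's public resolver (8.8.8.8) directly and compare with your default one:

$ nslookup www.example.com 8.8.8.8

If this resolves while your default resolver does not, switching DNS servers as described above should make BitBake's connectivity check pass.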

'User is missing the Overall/Read permission' error with Jenkins GitHub OAuth Plugin

I'm using the GitHub OAuth plugin for our logins, but for all of our users in the organisation I get an error:
Access Denied
<user> is missing the Overall/Read permission
I have tried everything I can possibly think of to make this work, and I'm probably going to fall back to making everyone an admin user, which I would prefer not to do.
Any advice would be appreciated.
This is how I resolved the authentication problem:
Edit config.xml file, e.g.
sudo vi /var/lib/jenkins/config.xml
Change the useSecurity element's value to false, e.g.
<useSecurity>false</useSecurity>
Remove the authorizationStrategy block
Restart Jenkins: /etc/init.d/jenkins restart
Access Jenkins through the URL as usual and reconfigure security.
I had the same problem with "... is missing the Overall/Read permission" on Jenkins (1.651.2) with the Credentials plugin activated.
But it was my own mistake: I had only configured the user on the project side (via the Credentials plugin) but missed configuring global security.
So I fixed it by selecting:
Jenkins -> Manage Jenkins -> Configure Global Security
and setting up the missing global settings (or the project matrix-based ones).
Have you followed this step, from the plugin page?
Control user authorization (i.e. who is allowed to see the jobs and build them) using the Github Committer Authorization Strategy
Also, make sure you actually allow authenticated users to access Jenkins
Under Jenkins global configuration, under Authorization, add a user/group called authenticated
Give that group the Overall Read permission
The group should show up with a "group" icon (two users), as opposed to the single-user icon.
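For reference, a sketch of what that ends up looking like in config.xml, assuming matrix-based security (a real file will list many more permission entries):

<authorizationStrategy class="hudson.security.GlobalMatrixAuthorizationStrategy">
  <permission>hudson.model.Hudson.Read:authenticated</permission>
</authorizationStrategy>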
Reset <useSecurity>true</useSecurity> to <useSecurity>false</useSecurity> in config.xml and set the permissions again.
Edit the file /var/lib/jenkins/config.xml and add the following lines:
<authorizationStrategy class="hudson.security.ProjectMatrixAuthorizationStrategy">
<permission>hudson.model.Hudson.Read:john.smith</permission>
</authorizationStrategy>
Restart Jenkins
What I did when I got this error was to edit config.xml as mentioned by other users and re-add my username in lowercase on Jenkins' "configureSecurity" page. I was using "KrustyHack" when adding permissions, but it didn't work; I had to add "krustyhack" instead, and it worked.
I hope it helps.
I had the same problem here, but it affected only some users, not all of them. Anyway, you should check public organization membership: the plugin's documentation states that "You have to be a public member of the organization for the authorization to work correctly." (https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin)
Follow the instructions from GitHub (https://help.github.com/articles/publicizing-or-hiding-organization-membership/) to make your organization membership public, and this might fix your issue.
Also check the case of user names in the authorizationStrategy element. I made my new user's name lowercase and restarted the service, and the error went away.
Fix it with these two shell commands on the server (sudo permission is required):
sudo ex +g/useSecurity/d +g/authorizationStrategy/d -scwq /var/lib/jenkins/config.xml
sudo /etc/init.d/jenkins restart
This will remove useSecurity and authorizationStrategy lines from your config file.
See also: Disable security at Jenkins website
We hit this same error when a github organization administrator changed the organization's settings for "Third-party access" to "restrict third-party application access". Reverting to the previous settings within the github organization resolved the problem.
See github oauth-app-access-restrictions for details on how to configure that properly.
The assignment of roles to users is stored in config.xml file. Add the ID of the user directly to the role and then restart Jenkins.
In my case, I have a role named editor and a bunch of users assigned to the role.
<role name="editor" pattern=".*">
<permissions>
<...>
<permission>hudson.model.Item.Create</permission>
<permission>hudson.model.Item.Workspace</permission>
<permission>...</permission>
</permissions>
<assignedSIDs>
<sid>bob</sid>
<sid>alice</sid>
<sid>newuser</sid>
</assignedSIDs>
</role>
The matrix security is not terribly clear. I am a member of a specific group in our org that has admin privileges; however, I am also an authenticated user. I would think that the one group supersedes the other, but I have to have both in order to actually log into the system and be an admin. It's screwed up IMO.
Go to your-jenkins-host:port/role-strategy/assign-roles and configure roles for the user.
I had the same problem before: your OAuth application needs your organization owner's approval; then the OAuth plugin can access the private data in it.
I am using the Crowd 2 plugin and I had the same problem.
I fixed it by downgrading the OWASP Markup Formatter plugin from version 1.2 to version 1.1 and then changing the Markup Formatter value in Configure Global Security to Raw HTML; before, it was Plain text.
I had exactly the same problem, and adding the Role Strategy Plugin fixed it.
All I had to do was install the plugin, create two groups, admin and developer, and then add users to the groups.
A much, much better solution than recreating the whole permissions matrix :)
I had a similar problem: I was not able to access my Jenkins account and the system was locked, with only an "Access Denied" error message.
When I tried to reinstall Jenkins, it prompted me with a Repair option.
Clicking the Repair option fixed the problem.
Go to $JENKINS_HOME (on Linux; the Jenkins .jenkins directory on Windows) and find the config.xml file.
Open this file in an editor (take a backup of the .jenkins home first).
Look for the <useSecurity>true</useSecurity> element in this file.
Replace "true" with "false"
Remove the elements authorizationStrategy and securityRealm
Start Jenkins
I found it in
C:\ProgramData\Jenkins\.jenkins
Jenkins Version: 2.319.2
Instead of removing all security (the top answer), add admin access for the user you want to give admin to. We had the same issue where all the admins were no longer with the company. This is how I resolved the authentication problem: I logged into:
jenkins@<jenkins server>:/var/lib/jenkins/
Edit the config.xml file and add a permission line:
<permission>hudson.model.Hudson.Administer:<username></permission>
Then restart Jenkins:
root@<jenkins server>:/$ /etc/init.d/jenkins restart
Just use the matrix at the bottom of the Jenkins > Configure Global Security page to grant permissions to the user (start with Read).
I edited the /var/lib/jenkins/config.xml file and replaced the
<authorizationStrategy>...</authorizationStrategy>
with
<authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
<denyAnonymousReadAccess>true</denyAnonymousReadAccess>
</authorizationStrategy>
These are the default settings after installation. Then restart the Jenkins service.