I received a No Pass on the Red Hat RHCE exam. Although it is not exact, an error similar to the following occurred

[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: [xxx],[xxx],[xxx]
Because of the above error, I received 0% for "Install and configure".
I eventually put the inventory content in the /etc/hosts file and worked around the problem.
An hour was wasted on this, so I couldn't finish all the tasks.
I'm going to retake the exam in two weeks. If I don't solve this problem, I'm afraid I'll get the same result.
420 dollars is a lot of money for me, so I need your help.
I am sure I created the /home/admin/ansible/inventory and /home/admin/ansible.cfg files correctly.
###########
/home/username/ansible/inventory:

[xxx]
10.10.10.10

/home/admin/ansible/ansible.cfg:

inventory = /home/username/ansible/inventory

## These are examples of the files I wrote.
Although I wrote the files at the paths presented in the exam, the above error kept repeating, so I modified the /etc/hosts file and proceeded that way.
I need help.
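For comparison, here is a minimal working layout, sketched from the paths in the question (which, note, mix /home/username and /home/admin; the inventory path in ansible.cfg must match where the inventory file actually lives). Two details are easy to miss: the inventory = line must sit under a [defaults] section header, and Ansible only reads ansible.cfg from $ANSIBLE_CONFIG, the current working directory, ~/.ansible.cfg, or /etc/ansible/ansible.cfg, in that order.

/home/admin/ansible/ansible.cfg:

[defaults]
inventory = /home/admin/ansible/inventory

/home/admin/ansible/inventory:

[xxx]
10.10.10.10

Running ansible from that directory should then list the host instead of warning about /etc/ansible/hosts:

$ cd /home/admin/ansible
$ ansible all --list-hosts
  hosts (1):
    10.10.10.10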


How to debug remote-cache write failures?

We're using Bazel (via Bazelisk) and set up a GCS bucket remote cache as documented. However, when we build, it seems we regularly get BulkTransferExceptions during the remote-cache writing phase:
> bazel build //... --sandbox_debug --verbose_failures
INFO: Invocation ID: fba91f67-788f-47cc-be4e-24f92ed11301
INFO: Analyzed 25 targets (74 packages loaded, 3245 targets configured).
INFO: Found 25 targets...
WARNING: Writing to Remote Cache:
BulkTransferException
INFO: Elapsed time: 17.115s, Critical Path: 15.47s
INFO: 16 processes: 16 worker.
INFO: Build completed successfully, 39 total actions
As far as I can tell, I have the appropriate access (Storage Object Admin).
I've been trying to get more information around that specific exception, but I've been unable to.
And if the bucket weren't working at all, I'd expect an exception when reading from the cache too: I'd seen such exceptions when attempting other URLs to reach the bucket, such as storage.cloud.google.com instead of storage.googleapis.com.
Any and all advice to help debug what's going on here is welcome! The documentation is sparse on what to do if you get exceptions, and as far as I can tell no results are being uploaded, so no caching is actually occurring.
Update 2020/07/09
For some unknown reason, when we moved from the original bucket to a more permanent, planned one, the exceptions stopped occurring. So things work for us now; as far as we can tell the buckets were configured the same, so we don't know why the first one was failing.
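For reference, the GCS remote-cache setup under discussion boils down to something like this sketch (the bucket name is a placeholder; both flags are documented Bazel options):

# .bazelrc
build --remote_cache=https://storage.googleapis.com/my-bazel-cache-bucket
# authenticate with Application Default Credentials
build --google_default_credentials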
You can use --verbose_failures, which will make it print out a longer stack trace. I just had a very similar problem, and figured out that my problem was due to insufficient permissions my service account had on my GCS bucket. I got this more helpful error message with --verbose_failures:
<?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>REDACTED#REDACTED.iam.gserviceaccount.com does not have storage.objects.delete access to REDACTED/cas/REDACTED.</Details></Error>
I had to read the source code the message came from. I'll try to submit a PR to add this hint to Bazel documentation: https://github.com/bazelbuild/bazel/pull/12945
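If the longer stack trace points at missing permissions like the AccessDenied above, one way to cover the storage.objects.delete access is to grant the service account Storage Object Admin on the cache bucket; the account and bucket names here are placeholders:

$ gsutil iam ch serviceAccount:my-builder@my-project.iam.gserviceaccount.com:roles/storage.objectAdmin gs://my-cache-bucket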

Why does BitBake error if it can't find www.example.com?

BitBake fails for me because it can't find https://www.example.com.
My computer is an x86-64 running native Xubuntu 18.04. Network connection is via DSL. I'm using the latest versions of the OpenEmbedded/Yocto toolchain.
This is the response I get when I run BitBake:
$ bitbake -k core-image-sato
WARNING: Host distribution "ubuntu-18.04" has not been validated with this version of the build system; you may possibly experience unexpected failures. It is recommended that you use a tested distribution.
ERROR: OE-core's config sanity checker detected a potential misconfiguration.
Either fix the cause of this error or at your own risk disable the checker (see sanity.conf).
Following is the list of potential problems / advisories:
Fetcher failure for URL: 'https://www.example.com/'. URL https://www.example.com/ doesn't work.
Please ensure your host's network is configured correctly,
or set BB_NO_NETWORK = "1" to disable network access if
all required sources are on local disk.
Summary: There was 1 WARNING message shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
The networking issue, the reason why I can't access www.example.com, is a question for the SuperUser forum. My question here is: why does BitBake rely on the existence of www.example.com? What is it about that website that is so vital to BitBake's operation? Why does BitBake raise an error if it cannot find https://www.example.com?
At this time, I don't wish to set BB_NO_NETWORK = "1". I would rather understand and resolve the root cause of the problem first.
Modifying poky.conf didn't work for me (and from what I read, modifying anything under poky is a no-no for a long-term solution).
Modifying conf/local.conf was the only solution that worked for me. Simply add one of the two options:
#check connectivity using google
CONNECTIVITY_CHECK_URIS = "https://www.google.com/"
#skip connectivity checks
CONNECTIVITY_CHECK_URIS = ""
For me, this appears to be a problem with my ISP (CenturyLink) not correctly resolving www.example.com. If I try to navigate to https://www.example.com in the browser address bar I just get taken to the ISP's "this is not a valid address" page.
Technically speaking, this isn't supposed to happen, but for whatever reason it does. I was able to work around this temporarily by modifying the CONNECTIVITY_CHECK_URIS in poky/meta-poky/conf/distro/poky.conf to something that actually resolves:
# The CONNECTIVITY_CHECK_URI's are used to test whether we can succesfully
# fetch from the network (and warn you if not). To disable the test set
# the variable to be empty.
# Git example url: git://git.yoctoproject.org/yocto-firewall-test;protocol=git;rev=master
CONNECTIVITY_CHECK_URIS ?= "https://www.google.com/"
See this commit for more insight and discussion on the addition of the www.example.com check. Not sure what the best long-term fix is, but the change above allowed me to build successfully.
If you want to resolve this issue without modifying poky.conf or local.conf or any of the files for that matter, just do:
$ touch conf/sanity.conf
It is clearly written in meta/conf/sanity.conf that:
Expert users can confirm their sanity with "touch conf/sanity.conf"
If you don't want to execute this command on every session or build, you can comment out the line INHERIT += "sanity" in meta/conf/sanity.conf, so the file looks something like this:
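(A sketch showing only the relevant lines; the rest of the file stays as-is.)

# Expert users can confirm their sanity with "touch conf/sanity.conf"
#INHERIT += "sanity"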
I had the same issue with Bell as my ISP: accessing example.com gave a DNS error.
I solved it by switching from the ISP's DNS to Google's DNS (to avoid making changes to configs):
https://developers.google.com/speed/public-dns/docs/using
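A quick way to confirm this kind of resolver problem is to compare your default resolver's answer with a public one; if the two differ, the ISP's DNS is rewriting responses (8.8.8.8 is Google Public DNS):

$ dig +short www.example.com           # via the default (ISP) resolver
$ dig +short www.example.com @8.8.8.8  # via Google Public DNS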

Error while running a command in the terminal

When I run the command "shimmercat devlove" it gives the following error:
Please find my devlove.yaml file here: https://www.dropbox.com/s/6xhk3lq497zw8y2/devlove.yaml?dl=0
Good question, @Atul Rai. I'm not completely sure, but I think it could be related to what is written here. It says:
...ShimmerCat probes the port to determine if the application is using HTTP/1.1 or FastCGI. The probing operation shows in ShimmerCat's logs, and may also show in the error log of the application.
Of course, if you are not using an API domain (you don't have a port subkey in your devlove.yaml config file), I don't think ShimmerCat has to do these probes, but I can't see another explanation for these error logs. Perhaps it would be useful if you updated your question with your devlove.yaml file?

How to fix Rebol Cheyenne 404 with domain name and configuration file?

On Windows Server 2008 I created
reboltutorial.com [
    root-dir %/www/
    default [%index.html %index.rsp %index.php]
]
It returns a 404 "page not found" error. Cheyenne only works with the IP address (http://88.191.118.45:2011/ is OK; http://reboltutorial.com also works, but on IIS 7).
How do I fix this?
Update: error log
Error in [conf-parser] : Can't access file www/ws-apps/ws-test-app.r
Error in [conf-parser] : Can't access file www/ws-apps/chat.r !
You have to make sure you have a directory named www in the directory where you installed Cheyenne (the default is %www/).
After that make sure the missing www/ws-apps/ws-test-app.r and www/ws-apps/chat.r files also exist.
First of all, HTTP/1.1 sends the domain name over the TCP session (in the Host: header of the request). That's how one IP can serve multiple domains (Apache calls this VirtualHosts), so browsing by IP sends a different request to whatever web server gets it.
Thus it's not a great technical mystery for your machine to be set up in a way that it serves a different page for an IP address vs. a domain. But since you put "reboltutorial.com" in your Cheyenne config, it seems that, if anything, that would be working while the IP address version would be failing.
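To illustrate: the two requests below arrive at the same IP, but a name-based server can route them differently because only the Host header distinguishes them.

GET / HTTP/1.1
Host: reboltutorial.com

GET / HTTP/1.1
Host: 88.191.118.45:2011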
I don't run Cheyenne, and you haven't offered up more details about your configuration. But since no one has answered I looked at the source tree to offer some advice on what you might try.
We know Cheyenne is getting the request and making the decision to hand back the 404, because of the format of the error. The Apache one looks different:
http://reboltutorial.com/show-me-apache-404/
http://88.191.118.45:2011/show-me-cheyenne-404/
So Cheyenne is getting the request. That much we know. The decision to serve up a 404 is made in send-response in the HTTPd.r file. It's a pretty simple test:
if all [file? out/content not exists? out/content][
    log/error ["File not found: " mold out/content]
    out/code: 404
    out/content: none
]
If that's the place your 404 is being generated, then there should be a "File not found:" in your log and a mention of what file that is. If not, something strange is going on. You can throw something in there (even a quit if you're suspicious of the printed output) just to make sure it's getting to the line.
(FYI: In the future when you're looking at other Cheyenne problems, there is a setting called "verbosity" which affects the output, and you can see in on-received in the HTTPd.r file that for verbosity > 0 it will log when it receives a request:
if verbose > 0 [
    log/info ["================== NEW REQUEST =================="]
    log/info ["Request Line=>" trim/tail to-string data]
]
If you bump up the verbosity level you might find an indication of the problem pretty quickly. If not, the code is fairly readable and you can put in your own trace points.)

Issue with Informix (ifx_connect)

Hi, I have a problem after installing the Informix Client SDK (ref: http://www.debian-administration.org/article/651/Connect_to_Informix_using_PHP5_on_Lenny_x86_64).
OS: CentOS
Here is the .php file that I use to connect:
$db_conn = ifx_connect("dbname#IPHost","user","pass");
It produces this error:
Warning: ifx_connect() [function.ifx-connect]: E [SQLSTATE=IX001 SQLCODE=-1829] in /var/www/html/index.php on line 5
Does anyone know the solution?
Thanks
The way to find out more about errors from Informix is often:
$ finderr -1829
-1829 Cannot open file citoxmsg.pam.
The file citoxmsg.pam is missing from the directory $INFORMIXDIR/msg.
If this error occurs, note all circumstances and contact IBM Technical Support.
$
(Give or take some blank lines.) The finderr command is found in $INFORMIXDIR/bin. You need $INFORMIXDIR set in the environment unless /usr/informix is correct - it could be a symlink to the actual software directory.
There are two possibilities:
1. You have not got INFORMIXDIR set in the environment when PHP is run, and/or the php.ini file does not define a value for $INFORMIXDIR, or the value is set incorrectly, or a default (quite possibly /usr/informix) is being used but the software is not installed there.
2. The installation is not complete: the relevant message file is missing as noted.
Of the two, I think reason 1 is much the more likely.
The IX001 value for SQLSTATE is of minimal use - it is the generic 'something went wrong with Informix' message. The SQLCODE is much more significant and helpful.
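To check possibility 1 quickly, you can reproduce the lookup from a shell running under the same environment as PHP; the install path below is an assumption, so adjust it to wherever the Client SDK actually lives:

$ export INFORMIXDIR=/opt/informix   # assumed Client SDK install location
$ ls $INFORMIXDIR/msg/citoxmsg.pam   # should exist if the install is complete
$ $INFORMIXDIR/bin/finderr -1829     # prints the message text quoted above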