I have been trying to install Kubernetes for the first time. After the initial setup, I was finally able to execute the kube-up.bash script:
kubernetes/cluster/kube-up.bash
Everything goes well and I even see the list of cluster services installed:
Then the book I am checking says:
"Go to: https//your_master_ip/ui/"
But when I try I can only see the following:
I assumed I did not perform a proper setup during the auth process, so I did:
gcloud auth list
But my active account is there with my Google email, so I am not sure what I am doing wrong.
I am able to access the Google Cloud Platform and see the project I created for this, and I can also see the traffic.
Also, the kubectl commands are not working on my system; bash throws an error saying the command was not found.
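(For context, the binary does not seem to be on my PATH at all; from what I have read, the release tarball ships prebuilt clients under platforms/<os>/<arch>, so presumably something like this is needed, though I have not confirmed the exact path:)
# check whether the client is on PATH
which kubectl
# the release tarball seems to ship prebuilt clients here (path is an assumption)
export PATH=$PATH:~/kubernetes/platforms/linux/amd64
kubectl version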
Can someone please assist me?
Regards
I am using a downloaded JSON file containing service account keys, instead of ADC, with code running on my local developer machine and communicating with live GCP Firestore.
After adding a service account to a role, in my case roles/datastore.user, do I have to do anything before it takes effect?
E.g. wait 15 minutes, redownload the JSON, restart some services, something else?
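(One sanity check I know of, in case it is relevant, is listing the project's IAM bindings and looking for the service account under roles/datastore.user; the project ID here is the one from the error below:)
# show the IAM policy bindings on the project
gcloud projects get-iam-policy my-project-prodlike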
Question relates to this error in automated tests running on my machine.
Test method MyProject.Data.Repositories.FirestoreRepositoryTests.FirestoreAccountDocRepository_UpdateAsync__updates threw exception:
Grpc.Core.RpcException: Status(StatusCode="PermissionDenied", Detail="Permission denied on resource project my-project-prodlike.", DebugException="Grpc.Core.Internal.CoreErrorDetailException: {"created":"#1642697226.430711000","description":"Error received from peer ipv4:172.217.169.74:443","file":"/Users/einari/Projects/grpc/grpc/src/core/lib/surface/call.cc","file_line":1074,"grpc_message":"Permission denied on resource project my-project-prodlike.","grpc_status":7}")
Note - I'm using Contrib.Grpc.Core.M1 since I'm on a new MacBook.
Note - I'm no longer using the above and now using Google's workaround GRPC lib adapter, just in case. See https://github.com/googleapis/google-cloud-dotnet/issues/7560#issuecomment-975414370.
The permission denied problem was being caused by an incorrect project name (and not permission actually being denied).
At the top of the Google Cloud Console is the name of the current project. However, that's really just a display-name alias; the real project ID is not shown by default, though it does appear in the browser's URL.
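If it helps anyone else, the real project IDs can also be listed from the command line (assuming the Cloud SDK is installed and authenticated):
# list projects with their real IDs (PROJECT_ID column) alongside their display names
gcloud projects list
# show which project the current gcloud configuration points at
gcloud config get-value project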
Of course, the error message implies that it found its target resource and denied access, which is not what actually happened.
I'm so tired.
I am trying to configure and set up the production environment of the WhatsApp Business API as described in https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
I have done everything mentioned there, and my Docker containers are also running on port 9090, as can be seen in the image.
Still, I can't access it. Whenever I try to call https://localhost:9090, I get a "This site can’t be reached" error. The WhatsApp Business API does not have good documentation or tutorials so far, so this site is my last resort.
I had a similar problem which could be your case: I saw the Docker containers were OK, but nothing was working. After a day of searching I found where it happened. My problem was that I had installed MySQL manually (not as a Docker container) on the same instance where Docker is running, and in db.env I just used 127.0.0.1, which was passed literally to the Docker containers. Looking at the wait_on_mysql.sh script, the WhatsApp containers were waiting until the MySQL IP had connectivity before actually doing anything, printing "MySQL is not up yet - sleeping" every second; of course they would never find any connectivity.
Since my installation is for development, and I am already using that database for other things, my solution was to use 172.17.0.1 (the Docker gateway of the containers) as the IP instead, and then add two sets of iptables rules on the host to redirect traffic from the Docker containers' IP to the address MySQL is bound to on that port (3306, the default in my case). After that everything works well. I think there are better solutions, but I didn't want to go further with it; evaluate whether this applies to your case.
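Roughly, the redirect is a single NAT rule of this shape; the 10.0.0.5 address below is only an example of whatever address MySQL is actually bound to on your host, and if MySQL only listens on 127.0.0.1 you may additionally need route_localnet or similar:
# rewrite connections the containers make to the Docker gateway on 3306
# so they land on the address MySQL is really listening on (example IP)
iptables -t nat -A PREROUTING -i docker0 -p tcp -d 172.17.0.1 --dport 3306 -j DNAT --to-destination 10.0.0.5:3306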
Check the command:
docker-compose logs > debug_output.txt
That gives you insight into what's happening; hope it helps someone.
I think your setup is already complete. You just need to start the registration process and begin sending messages. The containers are up and running, but calling https://localhost:9090 won't give you any response, as it is not a specified API endpoint.
Since you're using the prod single instance, the documentation can be found here, and it seems pretty straightforward: https://developers.facebook.com/docs/whatsapp/installation/prod-single-instance
You seem to have completed the first 7 steps. The next step would be to perform a health check to make sure everything is healthy. The API endpoint for that is https://localhost:9090/v1/health (see https://developers.facebook.com/docs/whatsapp/api/health).
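A quick way to hit that endpoint from the host could look like this (-k skips verification of the self-signed certificate; depending on the API version it may also expect an auth token from the login call, so treat that header as an assumption):
# health check against the local webapp container; -k because the cert is self-signed
curl -k https://localhost:9090/v1/health
# if it answers 401, log in first and pass the returned token, for example:
# curl -k -H "Authorization: Bearer <token>" https://localhost:9090/v1/health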
Has your DB also been set up?
I cannot see it in the docker screenshot.
Also - you have to accept the certificate, as it is not a certificate issued by a public CA.
I am trying to follow a Kubernetes tutorial, but I am kind of lost on the first steps when trying to use Katacoda... When I try to open the minikube dashboard I encounter this error:
failed to open browser: exec: "xdg-open": executable file not found in $PATH
and the dashboard itself remains unavailable when I try to open it through Host 1.
Later steps like running hello-world work fine, and I am able to run it locally using my own minikube instance, but I am a bit confused by this issue. Can I debug it somehow to access the dashboard during the course? This is particularly confusing because I am a bit afraid that I might encounter the same or a similar issue during a potential exam that also runs online...
Founder of Katacoda here. When running locally, xdg-open provides the wrapper for opening processes on your local machine, and installing the package would resolve the issue. As Katacoda runs everything within a sandbox, we cannot launch processes directly on your machine.
We have added an override for xdg-open that displays a friendly error message to users. They'll now be prompted to use the Preview Port link provided. The output is now:
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening %s in your default browser...
Minikube Dashboard is not supported via the interactive terminal experience.
Please click the 'Preview Port 30000' link above to access the dashboard.
This will now exit. Please continue with the rest of the tutorial.
X failed to open browser: exit status 1
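(For anyone curious how that kind of override works: it is typically just a small wrapper script placed earlier on $PATH than the real binary. Purely as an illustration, not the exact script we use:)
#!/bin/sh
# shadows the real xdg-open inside the sandbox and points users at the Preview Port link
echo "Minikube Dashboard is not supported via the interactive terminal experience."
echo "Please click the 'Preview Port 30000' link above to access the dashboard."
exit 1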
Looks like this command works:
apt install xdg-utils
I have been following the same tutorial in Katacoda and had the same issue. In my case, running these commands helped me solve the problem:
apt-get update
apt install xdg-utils
I am trying to run Vitess on Minikube and I'm going through the 'Getting Started' steps found here: http://vitess.io/getting-started/#set-up-google-compute-engine-container-engine-and-cloud-tools
I have installed everything I need to, including 'vtctlclient'. I have verified that all the correct directories were created when I did this.
However, there is a script in my directory '/go/src/github.com/youtube/vitess/examples/kubernetes' called 'kvtctl.sh' which uses kubectl to discover the pod name, sets up the tunnel, and then runs 'vtctlclient'. When I run this script, this is what is returned:
'Starting port forwarding to vtctld...
./kvtctl.sh: line 29: vtctlclient: command not found'
I am totally lost as to why the vtctlclient command is not found because I just installed it using Go.
Any help on this matter would be much appreciated.
Maybe the Go install directory is not in your PATH. Have you tried running vtctlclient manually (just like kvtctl.sh does)?
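If that is the culprit, a quick check could look roughly like this (assuming a standard setup where Go puts installed binaries in $GOPATH/bin):
# see whether the binary resolves at all
which vtctlclient
# if not, add the Go bin directory to PATH and re-run the script
export PATH=$PATH:$GOPATH/bin
./kvtctl.sh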
PS: You may want to join our Vitess Slack channel where you may get more prompt answers for your questions. Let me know if you need an invite.
I'm trying to automate some gsutil commands, but I'm struggling to see where the authentication files are kept and how to re-use them (if that's what happens).
I've gone through the gcloud init process in bash...
curl https://sdk.cloud.google.com | bash
gcloud init
All works well when I run
'gsutil ls'
Now I'm trying to automate the process, so this would work on a new server by adding it to a crontab there (rather than creating a new config each time).
I saw a mention of setting the env variable GOOGLE_APPLICATION_CREDENTIALS, so I copied my credentials from the web login to a file and tried it, e.g. trying as a different user to test:
export GOOGLE_APPLICATION_CREDENTIALS=/home/user/.gsutil/mycreds
and then ran gsutil ls, but it fails.
So I assume I've got the whole credentials thing a bit wrong. I'm assuming there is a file somewhere that was originally created by gcloud which I could use, but I can't see it anywhere?
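For reference, the default locations appear to be the following (paths as I understand them; they may differ on other setups):
# the Cloud SDK keeps its per-user configuration and credentials here by default
ls ~/.config/gcloud
# standalone gsutil (installed outside the Cloud SDK) uses a boto file instead
ls ~/.boto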
I've looked at the answer here, but it doesn't seem up to date now, as per the last comment.
Edit: I have followed Zachary's steps: gcloud auth activate-service-account --key-file=myfilelocation
However, with 'gsutil ls' I now get:
You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
So my next question would be: where is it looking for the project ID? If I run gsutil config, it seems to create a new set of auth credentials, which then causes another error, so I have removed that.
You should be able to do this without diving too deep into the implementation of authentication for gsutil.
If you're using standalone gsutil (if you installed via this method), the instructions in the linked question are still valid (as Travis points out).
If you'd like to continue using the gsutil supplied via the Cloud SDK, you should use service accounts. Service accounts are the preferred method of authenticating on headless machines or in non-interactive contexts.
Your flow would look something like the following:
Create a service account via the Google Cloud Developers Console.
On the remote machine, install the Cloud SDK and gsutil. If you're not installing interactively, it's better to skip the curl ... | bash method. Instead, download this install archive, extract it, and run the install.sh script. This script has options (visible with --help); if you specify choices to all of these options, it won't prompt you.
Copy the service account key to the remote machine. Run gcloud auth activate-service-account --key-file=/path/to/service-account.json.
Run gsutil. You should be appropriately authenticated.
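Put together, the non-interactive flow might look roughly like this (the key path and project ID are placeholders, and the install flags are illustrative; check install.sh --help for the exact option names on your SDK version):
# after downloading and extracting the install archive mentioned above:
./google-cloud-sdk/install.sh --disable-prompts --usage-reporting=false --path-update=true
# authenticate with the service account key and set the default project
gcloud auth activate-service-account --key-file=/path/to/service-account.json
gcloud config set project my-project-id
# verify
gsutil ls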
You have to set the default project and user for gsutil. Run the following command:
gcloud init
Choose 1. It shows you different users; select the user and then select the project.
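If the goal is automation on a server, the non-interactive equivalent would be something along these lines (the account and project ID are placeholders):
# set the account and default project without going through the interactive prompts
gcloud config set account user@example.com
gcloud config set project my-project-id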
I was trying to create a bucket with the project ID as its name:
$ gsutil mb -l eu gs://PROJECT-ID
Creating gs://root****/...
Error: You are attempting to perform an operation that requires a project id, with none configured. Please re-run gsutil config and make sure to follow the instructions for finding and entering your default project id.
Steps that resolved it for me:
gcloud auth login
gcloud config set project <PROJECT-ID>
gsutil mb -l eu gs://<PROJECT-ID>
Creating gs://root***/...
The error is gone and it works as expected.