The title is self-explanatory. I simply downloaded the flexible-hello-world app and deployed it with (almost) no modification. I deployed it to a service called some-service using this app.yaml:
# Copyright 2017, Google, Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# [START gae_flex_quickstart_yaml]
runtime: nodejs
env: flex
service: some-service
# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/nodejs/configuring-your-app-with-app-yaml
manual_scaling:
instances: 1
resources:
cpu: 1
memory_gb: 0.5
disk_size_gb: 10
# [END gae_flex_quickstart_yaml]
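For context, a deploy with this app.yaml is typically started with something like the following (a sketch; it assumes app.yaml sits in the app's root directory and the gcloud default project is already set):
gcloud app deploy app.yaml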
I have a billing account enabled for my project.
It hangs at this line:
...
7104cb4c0c814fa53787009 size: 2385
Finished Step #1
PUSH
DONE
--------------------------------------------------------------------------------------------------------------------------------------------------------------
Updating service [some-service] (this may take several minutes)...⠹
When I go to the App Engine console I see it has deployed, and I can access the URL here: https://some-service-dot-ashored-cloud-dv.uk.r.appspot.com/
But the 502 never goes away.
Help!
EDIT:
Some more information: while it is still hung on the deploy command, I run this in another terminal:
gcloud app instances list --service some-service
and I get:
SERVICE VERSION ID VM_STATUS DEBUG_MODE
some-service 20201119t184828 aef-some--service-20201119t184828-fr7n TERMINATED
EDIT 2:
When I try to ssh to it I get more weirdness:
gcloud app instances ssh aef-some--service-20201119t184828-fr7n --service some-service --version 20201119t184828
WARNING: This instance is serving live application traffic. Any changes made could
result in downtime or unintended consequences.
Do you want to continue (Y/n)?
Sending public key to instance [apps/ashored-cloud-dv/services/some-service/versions/20201119t184828/instances/aef-some--service-20201119t184828-fr7n].
Waiting for operation [apps/ashored-cloud-dv/operations/9de7b298-f4e9-47a7-8a8e-11411e649d50] to complete...done.
ERROR: gcloud crashed (TypeError): can only concatenate str (not "NoneType") to str
EDIT 3:
gcloud --version output:
Google Cloud SDK 319.0.0
bq 2.0.62
cloud-build-local
core 2020.11.13
gsutil 4.55
tl;dr: permissions
I removed the Editor role from my default App Engine service account, as recommended in the IAM dashboard.
Nowhere in the docs (that I could find) does it tell you what permissions are needed to deploy an App Engine flexible service.
Turns out, you need:
Logs Writer
Storage Object Viewer
Without Storage Object Viewer you'll get an error on deployment telling you the exact issue. Without Logs Writer you will not get an error, but the service will never come up.
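For reference, here is a sketch of how those two roles could be granted back to the App Engine default service account with gcloud; PROJECT_ID is a placeholder, and PROJECT_ID@appspot.gserviceaccount.com is only the usual address of that account, so verify yours in the IAM dashboard:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_ID@appspot.gserviceaccount.com" \
  --role="roles/logging.logWriter"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:PROJECT_ID@appspot.gserviceaccount.com" \
  --role="roles/storage.objectViewer"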
What a long 10 days...
EDIT: I was wrong, it says here in the docs what permissions you need.
I asked Google Support to file an internal bug that the correct error message is not returned if you do not have Logs Writer.
I'm attempting to write a script that checks whether my Kerberos tickets are valid or expiring soon. To do this, I use klist --json or klist to produce a list of currently active tickets (depending on the version of Kerberos installed), then I parse the results with regular expressions or JSON.
The end result is that I get a list of tickets that looks like this:
Issued Expires Principal
Aug 19 16:44:51 2020 Aug 22 14:16:55 2020 krbtgt/EXAMPLE.COM@EXAMPLE.COM
Aug 20 09:05:06 2020 Aug 20 19:05:06 2020 ldap/abc-dc101.example.com@EXAMPLE.COM
Aug 20 09:32:18 2020 Aug 20 19:32:18 2020 krbtgt/DEV.EXAMPLE.COM@EXAMPLE.COM
With a little bit of work, I can parse these results and verify them. However, I'm curious whether it's ever possible for Kerberos to have two tickets for the same principal. Reading the MIT page on Kerberos usage, it seems like there is only ever one ticket that would be the "initial" ticket.
Can I rely on uniqueness by principal, or do I need to check for the possibility of multiple tickets from the same principal?
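For illustration, a rough sketch of one way such a duplicate check could look (not a definitive implementation; it assumes the listing has already been reduced to the three-column table shown above and saved to a hypothetical tickets.txt):
# skip the header line, take the last field (the principal),
# and print any principal that appears on more than one ticket
tail -n +2 tickets.txt | awk '{print $NF}' | sort | uniq -d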
It's a bit more complicated than that.
TL;DR Your 2nd TGT seems related to cross-realm authentication, see the cross-realm point below.
klist shows the tickets that are present in the default system cache:
an error message if there is no such cache to query (e.g. a FILE cache that does not exist, the KEYRING kernel service not started, etc.)
possibly 1 TGT (Ticket Granting Ticket) that asserts your identity in your own realm
possibly N service tickets that assert you are entitled to contact service X on server Z (which may belong to another realm, see below)
in the case of cross-realm authentication, some intermediate tickets that allow you to convert your TGT in realm A.R into a TGT in realm R, which in turn lets you get a service ticket in realm B.R (that would be the default, hierarchical path used with e.g. Active Directory, but custom paths may be defined in /etc/krb5.conf under [capaths] or something like that, depending on the trusts defined between realms)
But note that not all service tickets are stored in the cache -- it is legit for an app to get the TGT from the cache, get a service ticket, and keep it private in memory. That's what Java does.
And it is legit for an app (or group of apps) to use a private cache, cf. the env variable KRB5CCNAME (pretty useful when you have multiple services running under the same Linux account and you don't want to mix up their SPNs), so you can't see their tickets with klist unless you tap this custom cache explicitly.
And it is legit for an app to not use the cache at all, and keep all its tickets private in memory. That's what Java does when provided with a custom JAAS config that mandates to authenticate with principal/keytab.
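For illustration, a couple of ways to peek at a non-default cache (the cache path here is purely hypothetical):
# point klist at a specific cache via the environment
KRB5CCNAME=FILE:/tmp/krb5cc_myservice klist
# or, with MIT klist, name the cache explicitly
klist -c FILE:/tmp/krb5cc_myservice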
I'm new to this and have just started to learn and install RabbitMQ on a Windows system.
I installed the Erlang VM and RabbitMQ in custom folders, not the default folders (both of them).
Then I restarted my computer.
By the way, my computer name is "NULL".
I cd to the RabbitMQ/sbin folder and run the command:
rabbitmqctl status
But the return message is:
Status of node rabbit@NULL ...
Error: unable to perform an operation on node 'rabbit@NULL'.
Please see diagnostics information and suggestions below.
Most common reasons for this are:
Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
Target node is not running
In addition to the diagnostics info below:
See the CLI, clustering and networking guides on http://rabbitmq.com/documentation.html to learn more
Consult server logs on node rabbit@NULL
DIAGNOSTICS
attempted to contact: [rabbit@NULL]
rabbit@NULL:
connected to epmd (port 4369) on NULL
epmd reports node 'rabbit' uses port 25672 for inter-node and CLI tool traffic
TCP connection succeeded but Erlang distribution failed
Authentication failed (rejected by the remote node), please check the Erlang cookie
Current node details:
node name: rabbitmqcli70@NULL
effective user's home directory: C:\Users\Jerry Song
Erlang cookie hash: 51gvGHZpn0gIK86cfiS7vp==
I have tried to restart RabbitMQ, and what I get is:
ERROR: node with name "rabbit" already running on "NULL"
And I have enabled all ports in the firewall.
https://groups.google.com/forum/#!topic/rabbitmq-users/a6sqrAUX_Fg
describes the problem where there is a cookie mismatch on a fresh installation of RabbitMQ. The easy solution on Windows is to synchronize the cookies.
Also described here: http://www.rabbitmq.com/clustering.html#erlang-cookie
Ensure the cookies are synchronized across locations 1, 2 and (optionally) 3 below:
1. %HOMEDRIVE%%HOMEPATH%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie for user %USERNAME%) if both the HOMEDRIVE and HOMEPATH environment variables are set
2. %USERPROFILE%\.erlang.cookie (usually C:\Users\%USERNAME%\.erlang.cookie) if HOMEDRIVE and HOMEPATH are not both set
3. For the RabbitMQ Windows service - %USERPROFILE%\.erlang.cookie (usually C:\WINDOWS\system32\config\systemprofile)
The cookie file used by the Windows service account and the user running CLI tools must be synchronized by copying the one from C:\WINDOWS\system32\config\systemprofile folder.
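For example, from an elevated command prompt the synchronization can be done with a plain copy (a sketch; it assumes the service-account cookie under systemprofile is the one you want to treat as canonical):
copy /Y C:\Windows\system32\config\systemprofile\.erlang.cookie %USERPROFILE%\.erlang.cookie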
If you are using dedicated drive/folder locations for your development tools/software in Windows 10 (not the Windows default location), one way you can synchronize the Erlang cookie, as described at https://www.rabbitmq.com/cli.html, is by copying the cookie as explained below.
Please note that in my case the HOMEDRIVE and HOMEPATH environment variables are both not set.
After copying "C:\Windows\system32\config\systemprofile\.erlang.cookie" to "C:\Users\%USERNAME%\.erlang.cookie",
the error "TCP connection succeeded but Erlang distribution failed" is resolved.
Now I am able to use the "rabbitmqctl.bat status" command successfully. Hence there is no need to install in the default location to resolve this error; synchronizing the cookie resolves it.
In my case a similar issue (authentication failed because of an Erlang cookie mismatch) was solved by copying the .erlang.cookie file from the Windows system dir, C:\Windows\system32\config\systemprofile\.erlang.cookie, to %HOMEDRIVE%%HOMEPATH%\.erlang.cookie (where %HOMEDRIVE% was set to H: and %HOMEPATH% to \ respectively).
Quick setup TODO for Windows, Erlang OTP 24 and RabbitMQ 3.8.19:
1. Download & Install Erlang [OTP 24] (needs Admin rights) from: https://www.erlang.org/downloads
2. Set ERLANG_HOME (should point to the install dir).
3. Download & Install a recent [3.8.19] RabbitMQ (needs Admin rights) from: https://github.com/rabbitmq/rabbitmq-server/releases/
4. Follow: https://www.rabbitmq.com/install-windows.html and/or https://www.rabbitmq.com/install-windows-manual.html
5. Set RABBITMQ_SERVER (should point to the install dir).
6. Update %PATH% by adding: ;%RABBITMQ_SERVER%\sbin
7. Fix the Erlang-cookie issue from above, following: https://www.rabbitmq.com/cli.html#erlang-cookie
8. Enable the Web UI by running the command: %RABBITMQ_SERVER%/sbin/rabbitmq-plugins.bat enable rabbitmq_management
From item #8 above I got an error because of a missing file: %USERPROFILEDIR%/AppData/Roaming/RabbitMQ/enabled_plugins -> had to create it and run %RABBITMQ_SERVER%/sbin/rabbitmq-plugins.bat enable rabbitmq_management again!
A run/restart along the way might be required.
Finally, log in to http://localhost:15672/ (guest:guest), or check with cURL:
curl -i -u guest:guest http://localhost:15672/api/vhosts
You should receive a response like:
HTTP/1.1 200 OK
cache-control: no-cache
content-length: 186
content-security-policy: script-src 'self' 'unsafe-eval' 'unsafe-inline';
object-src 'self'
content-type: application/json
date: Tue, 13 Jul 2021 11:21:12 GMT
server: Cowboy
vary: accept, accept-encoding, origin
[{"cluster_state":{"rabbit#hostname":"running"},"description":"Default virtual host","metadata":{"description":"Default virtual host","tags":[]},"name":"/","tags":[],"tracing":false}]
P.S. Some useful RabbitMQ CLI commands (copy-paste):
%RABBITMQ_SERVER%/sbin/rabbitmqctl start_app
%RABBITMQ_SERVER%/sbin/rabbitmqctl stop_app
%RABBITMQ_SERVER%/sbin/rabbitmqctl status
P.P.S. UPDATE: great article for this subject: https://www.journaldev.com/11655/spring-rabbitmq
I have reinstalled RabbitMQ on my computer using the default setup folder.
Then I checked with the command:
rabbitmqctl status
It works now, so it was not a problem with the Erlang VM (meaning Erlang can be installed in another folder).
Not using the RabbitMQ default setup folder (C:\Program Files\RabbitMQ Server) will cause some problem (like this one) that I couldn't figure out.
If anyone figures it out, I hope you can tell me why, and how to fix it.
How I resolved mine
It's mostly caused by a cookie mismatch on a fresh installation of RabbitMQ.
Follow these 2 steps:
1. Copy the .erlang.cookie file from C:\Windows\System32\config\systemprofile and paste it into the
C:\Users\[your username] folder.
2. Run rabbitmq-service.bat stop and then rabbitmq-service.bat start.
Done. It should work now when you run 'rabbitmqctl start_app'. Good luck.
Note: if you have more than one user, put the cookie in the correct user folder.
On CentOS:
Add the IP/nodename pair to /etc/hosts on each node (see the sketch after these steps).
Restart the rabbitmq-server service on each slave node.
Works for me.
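For illustration, the /etc/hosts entries on each node might look like this (the IPs and node names are purely hypothetical):
192.168.0.11  rabbit-master
192.168.0.12  rabbit-slave1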
I got an error like this; I just stopped my RabbitMQ by killing the process on port 25672.
Here is the syntax for Linux:
kill -9 $(lsof -t -i:25672)
Just adding my experience if it helps others down the line.
I wrote a PowerShell .ps1 script to install and configure RabbitMQ, to be used as one of the steps to provision a server with Packer.
I wrote the code on a fresh AWS W2016 Server build. It worked fine when run on the box (as administrator, from an admin PS console), but when the same code was moved over to the Packer build server, it would fall over when doing the rabbitmqctl.bat configuration steps via Packer, despite both using (as far as I can tell) Administrator to run the scripts.
So this worked on the coding box:
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" add_user Username Password}
Invoke-Command -ScriptBlock $pathvargs
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" set_user_tags User administrator}
Invoke-Command -ScriptBlock $pathvargs
$pathvargs = {cmd.exe /c "rabbitmqctl.bat" set_permissions -p "/" User "^User-.*" ".*" ".*"}
Invoke-Command -ScriptBlock $pathvargs
Write-Host "Did RabbitMQ"
But I had to precede this with...
copy "C:\Windows\system32\config\systemprofile\.erlang.cookie" "C:\Program Files\RabbitMQ Server\rabbitmq_server-3.7.17\sbin\.erlang.cookie"
copy "C:\Windows\system32\config\systemprofile\.erlang.cookie" $env:userprofile\.erlang.cookie -force
... On the Packer box.
I am guessing there is some context issue going on but I'm using
"winrm_username": "Administrator",
in the Packer builders block, so I thought this would suffice.
TL;DR - Use the Cookie even though it works without it in some instances.
I encountered the same error after installing the Erlang VM and RabbitMQ using the default installation folders in Windows 10. I managed to start the management plugin and access it via HTTP, but status failed with this error.
The cookie was fine in all folders (%HOMEDRIVE%%HOMEPATH%, %USERPROFILE%, C:\WINDOWS\system32\config\systemprofile).
I had to restart Windows to make it work. After the restart it set up something to run at startup and asked permission to make an exception in the firewall.
In my case, the file was at C:\Windows\.erlang.cookie; I just copied it to C:\Users\{USERNAME} and everything works. Thanks to everyone for the hints.
Another thing to check, after making sure the cookie file is in all the locations, is whether you installed 32-bit Erlang rather than 64-bit.
That happened to me. I removed 32-bit Erlang, installed 64-bit, and rabbitmqctl status now returns the expected results.
I'm new to StreamSets. Following the documentation tutorial, I was getting a
FileNotFound: ... HADOOPFS_14 ... (permission denied)
error when trying to set the destination location to a local FS directory and preview the pipeline (basically saying either the file can't be accessed or does not exist), yet the permissions on the directory in question are drwxrwxr-x. 2 mapr mapr. Eventually I found a workaround by making the destination folder publicly writable ($ chmod o+w /path/to/dir). Yet the user that started the sdc service (while I was following the installation instructions) should have had write permissions on that directory (it was root).
I set the sdc user env. vars. to use the name "mapr" (the owner of the directories I'm trying to access), so why did I get rejected? What is happening here when I set the env. vars. for sdc (because it does not seem to be doing anything)?
This is a snippet of what my /opt/streamsets-datacollector/libexec/sdcd-env.sh file looks like:
# user that will run the data collector, it must exist in the system
#
export SDC_USER=mapr
# group of the user that will run the data collector, it must exist in the system
#
export SDC_GROUP=mapr
So my question is: what determines the permissions for the sdc service (which I assume is what the StreamSets web UI uses to access FS locations)? Any explanation or links to specific documentation would be appreciated. Thanks.
Looking at the output of ps -ef | grep sdc to examine who the system thinks the owner of the sdc process really is, I found that it was listed as:
sdc 36438 36216 2 09:04 ? 00:01:28 /usr/bin/java -classpath /opt/streamsets-datacollector
So it seems that editing sdcd-env.sh did not have any effect. What did work was editing the /usr/lib/systemd/system/sdc.service file to look like the following (notice that I have set the user and group to the user that owns the directories to be used in the StreamSets pipeline):
[Unit]
Description=StreamSets Data Collector (SDC)
[Service]
User=mapr
Group=mapr
LimitNOFILE=32768
Environment=SDC_CONF=/etc/sdc
Environment=SDC_HOME=/opt/streamsets-datacollector
Environment=SDC_LOG=/var/log/sdc
Environment=SDC_DATA=/var/lib/sdc
ExecStart=/opt/streamsets-datacollector/bin/streamsets dc -verbose
TimeoutSec=60
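One detail worth noting (my assumption, not something stated in the original steps): after editing a unit file, systemd typically needs a daemon-reload before a restart picks up the change:
sudo systemctl daemon-reload   # pick up the edited sdc.service unit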
Then restarting the sdc service (with systemctl start sdc, on CentOS 7) showed:
mapr 157013 156955 83 10:38 ? 00:01:08 /usr/bin/java -classpath /opt/streamsets-datacollector...
and I was able to validate and run pipelines with origins and destinations on the local FS that are owned by the user and group set in the sdc.service file.
* NOTE: the specific directories used in the initial post are hadoop-mapr directories mounted via NFS (MapR 6.0), hosted on nodes running CentOS 7 (though the fact that they are NFS should mean that this solution applies generally).
I have an HAProxy install which was configured by someone who left the company. It runs on Ubuntu 10.04 and it seems to use 3 configuration files in the directory /etc/haproxy
haproxy.cfg
haproxy.http.cfg
haproxy.https.cfg
I don't see the point in using the haproxy.https.cfg file, as I believe (in our configuration) it can all be configured from a single haproxy.http.cfg file, but when I remove that httpS file it complains bitterly and refuses to run. My question:
Is this the standard configuration haproxy uses? If not, I can't find a reference to the "S" file anywhere. Can anyone suggest how HAProxy concludes it should use it?
Thanks
The very answer to your question: your haproxy is simply launched with those three config files (-f haproxy.cfg -f haproxy.http.cfg -f haproxy.https.cfg, maybe from /etc/init.d/haproxy, but mileage varies depending on your distribution).
If you remove the file, of course it will complain.
This is not particularly standard, but it isn't bad either; it helps structure the conf rather than having one very long file.
The task of the .https version will certainly be to redirect the https traffic towards a service that can handle HTTPS (stunnel or nginx usually), since haproxy cannot terminate SSL connections (stunnel has to be patched; see the haproxy page).
If you want, you can merge those files into one or two; just find out how haproxy is launched (check init.d, or let us know which distribution) and fix it appropriately.
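A quick way to confirm how it is launched on your box (a sketch; the exact init-script and defaults-file paths vary by distribution, so treat them as assumptions):
ps -ef | grep '[h]aproxy'   # shows the -f flags the running process was started with
grep -n -e '-f ' /etc/init.d/haproxy /etc/default/haproxy 2>/dev/null   # where the config files are wired in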
I believe that it is only /etc/haproxy/haproxy.cfg that is used by default.
This may be of use to you (1.4 configuration reference):
http://haproxy.1wt.eu/download/1.4/doc/configuration.txt