I am exploring Landscape server. In my lab environment everything works as expected; there I am using a self-signed certificate. In my staging environment I am using a Let's Encrypt certificate for HTTPS.
I have added 2 hosts in Landscape and both show up on the dashboard. All tabs work as expected except the notification tab, which shows whether any system is asking for a reboot, package upgrades, etc.
I am getting the error below in appserver.log:
Aug 19 12:46:15 appserver-1 ERR https://abc.xyz.com/account/standalone/alert/13/resolve
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/zope/publisher/publish.py", line 129, in publish
    obj = request.traverse(obj)
  File "/usr/lib/python2.7/dist-packages/zope/publisher/browser.py", line 540, in traverse
    ob = super(BrowserRequest, self).traverse(obj)
  File "/usr/lib/python2.7/dist-packages/zope/publisher/http.py", line 457, in traverse
    ob = super(HTTPRequest, self).traverse(obj)
  File "/usr/lib/python2.7/dist-packages/zope/publisher/base.py", line 260, in traverse
    obj = publication.traverseName(self, obj, entry_name)
  File "/usr/lib/python2.7/dist-packages/zope/app/publication/zopepublication.py", line 198, in traverseName
    ob2 = adapter.publishTraverse(request, nm)
  File "/opt/canonical/landscape/canonical/routes/publisher.py", line 158, in publishTraverse
    request.response.redirect(location, trusted=trusted)
  File "/usr/lib/python2.7/dist-packages/zope/publisher/browser.py", line 759, in redirect
    return super(BrowserResponse, self).redirect(location, status, trusted)
  File "/usr/lib/python2.7/dist-packages/zope/publisher/http.py", line 886, in redirect
    % target_host)
ValueError: Untrusted redirect to host '1.2.3.4:443' not allowed.
If I access the Landscape dashboard using the IP instead of the domain name (i.e. abc.xyz.com), it works fine.
So it is an issue with redirection, but I have been unable to fix it.
I have changed the domain name and IP here for security purposes.
Please help me.
This issue is fixed by following the steps below:
1. Go to Organisations -> Settings.
2. In Root URL, provide your domain name. Earlier I had my IP there; after replacing it with the domain name, everything works fine.
Thanks.
So I have two V2 Composers running in the same project. The only difference between the two is that in one of them I'm using the default subnet and default/autogenerated values for cluster-ipv4-cidr & services-ipv4-cidr. In the other one I've created another subnet in the same (default) VPC, in the same region but with a different IP range, and I reference this subnet when creating the Composer; additionally I give it cluster-ipv4-cidr=xx.44.0.0/17 and services-ipv4-cidr=xx.45.4.0/22.
Everything else is the same between these two Composer environments. In the environment where I have a custom subnet I'm not able to run any KubernetesPodOperator jobs; they return the error:
ERROR - Exception when attempting to create Namespaced Pod:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 111, in run_pod_async
resp = self._client.create_namespaced_pod(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 6174, in create_namespaced_pod
(data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs) # noqa: E501
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api/core_v1_api.py", line 6251, in create_namespaced_pod_with_http_info
return self.api_client.call_api(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 340, in call_api
return self.__call_api(resource_path, method,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 172, in __call_api
response_data = self.request(
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 382, in request
return self.rest_client.POST(url,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 272, in POST
return self.request("POST", url,
File "/opt/python3.8/lib/python3.8/site-packages/kubernetes/client/rest.py", line 231, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (404)
and this pod does not appear if I go to GKE to check workloads. These two GKE environments use the same Composer service account, K8s service account and namespaces, but from my understanding that is not an issue. Jobs outside of the KubernetesPodOperator work fine. I had a theory that perhaps the non-default subnet needed additional permissions, but I wasn't able to confirm or deny this theory yet.
From the log I can see that the KubernetesPodOperator can't locate the worker, even though from the UI I can find it, and non-KubernetesPodOperator jobs also do this successfully.
Would appreciate some guidance on what to do / where to look? For reference, a rough sketch of the kind of task involved is below.
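This is only a sketch, not my actual DAG; the DAG ID, namespace, image and command are placeholders, and depending on the cncf.kubernetes provider version the operator may instead live at airflow.providers.cncf.kubernetes.operators.pod:

from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

with DAG("k8s_pod_example", start_date=datetime(2023, 1, 1), schedule_interval=None, catchup=False) as dag:
    hello = KubernetesPodOperator(
        task_id="hello_pod",        # placeholder task id
        name="hello-pod",
        namespace="default",        # placeholder namespace
        image="python:3.8-slim",    # placeholder image
        cmds=["python", "-c", "print('hello from the pod')"],
    )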
I'm trying to run flutter_stripe's example app. I forked and cloned the GitHub repository to my laptop.
Starting the yarn server results in 18 errors. They all start with Object is of type 'unknown' and refer to error, e, or err, on lines 130, 301, 442, 450, 451, 455, 456, 464, 578, 586, 587, 591, 592, 595, 599, and 600. Then it says Command failed with exit code 2.
Is this a null safety issue? How do I fix it?
Your existing GitHub issue with the library maintainers is likely to be your best source of help; however, reading it I noticed you said:
In the last step, setting up server/.env, my Stripe account has pk_test and a pk_live Publishable and Secret Keys. My guess is that I should use the pk_test keys in server/.env.example. Let’s make this clear in the comment at the top of server/.env.example.
This seems to be a misunderstanding of your Stripe API keys. Secret keys (sk_) are for your server and publishable keys (pk_) are for your client-side application; they come as matching pairs, and there is one pair each for live mode and test mode. You need to use a matching secret and publishable key from your dashboard.
Additionally, when setting up secrets in environment files, you'll typically be creating a .env file in the server/repo root directory. I read the above as though you might be trying to set up your keys in the .env.example file, which I don't expect would work. You should check with the developer of the library/example about this if .env doesn't work; a rough sketch is below.
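Purely for illustration, a test-mode server/.env might look like the lines below; the variable names are my guesses, and the real ones should be copied from server/.env.example:

# Hypothetical variable names -- copy the actual ones from server/.env.example
STRIPE_SECRET_KEY=sk_test_xxxxxxxxxxxxxxxxxxxxxxxx
STRIPE_PUBLISHABLE_KEY=pk_test_xxxxxxxxxxxxxxxxxxxxxxxx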
The sample below demonstrates a failure to authenticate to a Google service account using a key created, just a few lines above, with the Python API.
I was not able to find any documentation on how these programmatically created keys can be used.
Keys created by clicking through the console UI work just fine.
However, for our use case, we need to create the keys programmatically.
There is an unanswered issue on GitHub as well: https://github.com/googleapis/google-cloud-python/issues/7824
logger.info("Created new service account: {}".format(ret))
logger.info("Getting the new service account key")
request=iam.projects().serviceAccounts().keys().create(name=ret['name'],
body={'privateKeyType':'TYPE_GOOGLE_CREDENTIALS_FILE'})
key=request.execute()
>>> print json.dumps(key, indent=4)  # just to verify what we got
{
"keyOrigin": "GOOGLE_PROVIDED",
"name": "goodandvalidname",
"validBeforeTime": "2029-06-28T15:09:59Z",
"privateKeyData": "datadata",
"privateKeyType": "TYPE_GOOGLE_CREDENTIALS_FILE",
"keyAlgorithm": "KEY_ALG_RSA_2048",
"validAfterTime": "2019-07-01T15:09:59Z"
}
>>> credentials = google.oauth2.service_account.Credentials.from_service_account_info(key)
Traceback (most recent call last):
File "/home/user/.p2/pool/plugins/org.python.pydev.core_7.2.1.201904261721/pysrc/_pydevd_bundle/pydevd_exec.py", line 3, in Exec
exec exp in global_vars, local_vars
File "<console>", line 1, in <module>
File "/home/user/.local/lib/python2.7/site-packages/google/oauth2/service_account.py", line 193, in from_service_account_info
info, require=['client_email', 'token_uri'])
File "/home/user/.local/lib/python2.7/site-packages/google/auth/_service_account_info.py", line 51, in from_dict
'fields {}.'.format(', '.join(missing)))
ValueError: Service account info was not in the expected format, missing fields token_uri, client_email.
Any help appreciated.
Answering my own issue and hopefully helping others...
The 'key' we get from the Python API is NOT the 'JSON key' as obtained from gcloud. The dict returned by iam.projects().serviceAccounts().keys().create() contains the field privateKeyData, which itself contains the ENTIRE 'JSON key' one needs to authenticate to Google Cloud.
The data in this field is base64 encoded and needs to be decoded and then parsed as JSON. Below is a snippet from functional code, demonstrating that credentials can be loaded back from such a key:
import base64
import json
import google.oauth2.service_account

request = iam.projects().serviceAccounts().keys().create(
    name=ret['name'],
    body={'privateKeyType': 'TYPE_GOOGLE_CREDENTIALS_FILE'})
key = request.execute()
# privateKeyData holds the entire JSON key file, base64 encoded.
key = base64.decodestring(key['privateKeyData'])  # base64.b64decode on Python 3
key = json.loads(key)
credentials = google.oauth2.service_account.Credentials.from_service_account_info(key)
I figured this out by stepping through the gcloud service account key creation, line by line, with the Python debugger. Hope this helps others.
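As a further illustration (this goes beyond the original snippet), the decoded dict can also be written to disk and then used exactly like a JSON key file downloaded from the console; the filename is arbitrary and the snippet reuses the key object and imports from the code above:

# Persist the decoded key so it behaves like a downloaded JSON key file.
with open('service-account-key.json', 'w') as fh:
    json.dump(key, fh)

credentials = google.oauth2.service_account.Credentials.from_service_account_file(
    'service-account-key.json')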
I am trying to set up Dropbox as a custom backup destination following the cPanel blog below. The connection is working, but the backup files are not being transferred to Dropbox, and when I press validate on the custom backup destination it gives the following error.
https://blog.cpanel.com/cpanel-whm-custom-backup-transport-example-dropbox/
Error: Validation for transport “dropbox” failed: Could not list files in
destination: Executed /usr/local/bin/backup_transport_dropbox.pl ls /
remotehost remoteuser : 2018-08-26T15:54:21 [WebService::Dropbox] [ERROR]
https://api.dropboxapi.com/2/files/list_folder {"path":"/"} -> [400] Error in
call to API function "files/list_folder": request body: path: Specify the root
folder as an empty string rather than as "/". at
/usr/local/share/perl5/WebService/Dropbox.pm line 184.
I am new to the Dropbox API and have no idea about Perl, so I could not figure out what is discussed in the link below.
https://github.com/silexlabs/unifile/issues/77
The error message is correctly indicating that the Dropbox API expects the value "" when referring to the root alone. The code is instead sending the value "/". This looks like a bug in the code.
It looks like you've already opened an issue with the developer for this:
https://github.com/CpanelInc/backup-transport-dropbox/issues/3
They should update the code to use "" when referring to the root folder on Dropbox.
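To illustrate the convention the API enforces, here is a small Python sketch (rather than the Perl the cPanel script uses); the access token is a placeholder:

# Dropbox expects "" (empty string) for the root folder, not "/".
import requests

ACCESS_TOKEN = "YOUR_DROPBOX_ACCESS_TOKEN"  # placeholder

resp = requests.post(
    "https://api.dropboxapi.com/2/files/list_folder",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json={"path": ""},  # sending {"path": "/"} produces the 400 error quoted above
)
print(resp.status_code, resp.json())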
I work in an Azure environment. I have a VM that runs a Django application (Open edX) and a Mongo server on another VM instance (Ubuntu 16.04). Whenever I try to load anything in the application (where the data is fetched from the Mongo server), I would get an error like this one:
Feb 23 12:49:43 xxxxx [service_variant=lms][mongodb_proxy][env:sandbox] ERROR [xxxxx 13875] [mongodb_proxy.py:55] - Attempt 0
Traceback (most recent call last):
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/mongodb_proxy.py", line 53, in wrapper
return func(*args, **kwargs)
File "/edx/app/edxapp/edx-platform/common/lib/xmodule/xmodule/contentstore/mongo.py", line 135, in find
with self.fs.get(content_id) as fp:
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/gridfs/__init__.py", line 159, in get
return GridOut(self.__collection, file_id)
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/gridfs/grid_file.py", line 406, in __init__
self._ensure_file()
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/gridfs/grid_file.py", line 429, in _ensure_file
self._file = self.__files.find_one({"_id": self.__file_id})
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/pymongo/collection.py", line 1084, in find_one
for result in cursor.limit(-1):
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/pymongo/cursor.py", line 1149, in next
if len(self.__data) or self._refresh():
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/pymongo/cursor.py", line 1081, in _refresh
self.__codec_options.uuid_representation))
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/pymongo/cursor.py", line 996, in __send_message
res = client._send_message_with_response(message, **kwargs)
File "/edx/app/edxapp/venvs/edxapp/local/lib/python2.7/site-packages/pymongo/mongo_client.py", line 1366, in _send_message_with_response
raise AutoReconnect(str(e))
AutoReconnect: timed out
First I thought it was because my Mongo server lived in an instance outside of the Django application's virtual network, so I created a new Mongo server on an instance inside the same virtual network, but I still get these issues. Mind you, I do receive the data eventually, but I feel I wouldn't get timed-out errors if the connection were normal.
If it helps, here's the Ansible playbook that I used to create the Mongo server: https://github.com/edx/configuration/tree/master/playbooks/roles/mongo_3_2
Also I have tailed the Mongo log file and this is the only line that would appear at the same time I would get the timed out error on the application server:
2018-02-23T12:49:20.890+0000 [conn5] authenticate db: edxapp { authenticate: 1, user: "user", nonce: "xxx", key: "xxx" }
mongostat and mongotop don't show anything out of the ordinary, and I also checked the htop output.
I don't know what else to look for or how to fix this issue.
I forgot to change the Mongo server IP in the Django application settings to point to the new private IP address inside the virtual network instead of the public IP. After I changed that, I don't get that issue anymore.
If you are reading this: make sure you make that private IP static in Azure if you are using it in the Django application settings. A rough sketch of the kind of change is below.
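Purely as an illustration (the Open edX settings files keep the Mongo host in their own configuration; the addresses, credentials and database name here are placeholders):

# Point the Mongo connection at the static private (VNet) IP, not the public one.
from pymongo import MongoClient

# Before: public IP, so traffic left the virtual network and timed out.
# client = MongoClient("mongodb://user:password@40.x.x.x:27017/edxapp")

# After: static private IP inside the same virtual network.
client = MongoClient(
    "mongodb://user:password@10.0.0.5:27017/edxapp",  # placeholder address
    serverSelectionTimeoutMS=5000,
)
print(client.server_info()["version"])  # quick connectivity check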