I'd like to know how to get the current hostname from a root object in Surf.
I'm writing a webscript which serves a JNLP, so I have no page context, and ${url.context} only returns /share.
I'm looking for a way to get the hostname, e.g. http://foobar.com, or to find out whether this is possible at all from the server side.
I've been through http://wiki.alfresco.com/wiki/Surf_Platform_-_Freemarker_Template_and_JavaScript_API# without success.
On the repository side, the URLs for Alfresco and Share are known. You can get all the individual parts from SysAdminParams, and UrlUtils will join them together for you to give a full URL.
As far as I can tell, though, these details are only ever held on the Alfresco repository tier, and are never passed over to Share. All the absolute URLs in Share appear to be generated in the repo and sent over.
One option, then, is for you to change your webscript to a repo webscript rather than a Share one. That'll give you access to the appropriate beans. Share will proxy repo webscripts for you, so the logged-in user can still reach it directly from Share; you'll want a URL a bit like /share/proxy/alfresco/my/web/script to access it.
Otherwise, create a new repo webscript that exposes the useful bits of the SysAdminParams and Share URLs, and have your Share webscript fetch it (likely with caching), as sketched below. There are plenty of examples of this pattern to work from.
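If it helps, here's a minimal sketch of that Share-tier controller in server-side JavaScript. The /sample/share-url repo webscript is hypothetical - you'd implement it to return JSON built from SysAdminParams/UrlUtils on the repository side:

    // Share webscript controller: ask the repo tier for the Share URL.
    // Assumes a hypothetical repo webscript at /sample/share-url that
    // returns JSON like {"shareUrl": "http://foobar.com/share"}.
    var connector = remote.connect("alfresco");
    var result = connector.get("/sample/share-url");

    if (result.status == 200)
    {
       // Expose the value to this webscript's FreeMarker template
       model.shareUrl = JSON.parse(result.response).shareUrl;
    }
    else
    {
       model.shareUrl = null;
    }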
Have you tried ${url.serverPath}?
Alternatively, you can use the standard alfresco-global.properties settings:
share.context=share
share.host=foobar.com
share.port=80
share.protocol=http
These properties can be injected anywhere you want; take a look here: Accessing values from Alfresco's alfresco-global.properties file
The only thing to be aware of is that these properties are repository-related.
So you can write a repo webscript which hands all of these properties over to Share.
And since you're using Share, you can easily determine the host with client-side JavaScript ;).
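For completeness, a minimal client-side sketch:

    // In the browser, the current host is directly available:
    var host = window.location.hostname;                               // "foobar.com"
    var base = window.location.protocol + "//" + window.location.host; // "http://foobar.com"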
I am new to Squarespace and I was wondering whether it can interact with an external REST API using JSON?
For example, say I have a database hosted privately and I want data from it to be shown via Squarespace, with certain pages restricted according to the user's privileges.
Is any of the above possible, and if so, can you direct me to an example? I can't seem to find anything on this via Google.
Thanks
From Squarespace:
Squarespace doesn’t support server-side code, including PHP, Ruby, Ruby on Rails, and SQL.
Therefore, the only way to connect to an external API (besides those supported by Squarespace's official 'extensions') is to use "client-side" (in-browser) JavaScript.
So, the database solution which you use must be capable of securely handling client-side connections (for example, Firebase can do that). To interface with it, you must add the JavaScript to your Squarespace site via code block or code injection. An example explanation of doing that can be found at this question.
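As a rough illustration, here's the kind of script you'd drop into a code block or code injection. The https://api.example.com/items endpoint and the item-list element are placeholders, and your database service must allow CORS requests from your Squarespace domain:

    // Client-side sketch: fetch JSON from an external REST API and
    // render it into a block on the page.
    fetch("https://api.example.com/items", {
      headers: { "Accept": "application/json" }
    })
      .then(function (response) { return response.json(); })
      .then(function (items) {
        var target = document.getElementById("item-list"); // a code block on your page
        items.forEach(function (item) {
          var li = document.createElement("li");
          li.textContent = item.name;
          target.appendChild(li);
        });
      })
      .catch(function (err) { console.error("API request failed:", err); });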
As for allowing/disallowing content based on data returned from the database: it can be done, but only client-side. That means you can make the site appear to restrict access, and make certain pages inconvenient for others to reach, based on information in the database; but because it is all client-side, anyone familiar with web development and the browser's web inspector could technically circumvent it. So it's not something you'd want to rely on if it is critical that the content be truly restricted.
Squarespace does have its own "Members Areas" which can be used to solve content access problems. However, it's extremely limited at the moment, and there are many scenarios it does not address.
I've looked around the site to see whether anyone has changed the CKAN API interface so that, instead of uploading documents and databases, users can type data directly into the site, but I haven't found any use cases.
Currently we have a page where people upload data sets through Excel forms they've filled out, but we want to make it a bit more user-friendly by changing the API so that they can fill out a form on the page rather than downloading the template, filling it out and then uploading it.
Does CKAN have the ability to support this? If so, are there any examples or use cases of websites that use forms rather than uploads?
This is certainly possible.
I'm not aware of any existing extensions that provide that functionality, but you can check the official list of CKAN extensions to see if anything fulfills your needs.
If there is no existing extension that suits you, then you could write your own; see the extension guide for details on how to do that.
Adding a function to CKAN's API is possible, but probably not what you want in this case: the web UI usually does not interact with CKAN via the API but via Flask/Pylons controllers. Hence, you would add your own controller which first serves your form and then processes the submitted inputs.
You can take a look at the ckanext-pages extension, which does exactly that (for editing static pages instead of datasets, but your code would be similar).
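For a quick prototype before writing a full extension, the page's form could even post straight to CKAN's action API from client-side JavaScript (your extension's controller would do the equivalent server-side). A hedged sketch, where the CKAN URL and token are placeholders:

    // Sketch: create a dataset from an HTML form via CKAN's action API.
    var CKAN_URL = "https://ckan.example.org";

    function createDataset(form) {
      var payload = {
        name: form.elements["name"].value,   // URL-safe dataset id
        title: form.elements["title"].value,
        notes: form.elements["notes"].value
      };
      return fetch(CKAN_URL + "/api/3/action/package_create", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": "YOUR-API-TOKEN"  // don't ship a real token client-side
        },
        body: JSON.stringify(payload)
      }).then(function (r) { return r.json(); });
    }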
I have a very specific use case for AEM so maybe you have a solution or an alternative.
I'd need to be able to store the HTML version of the page in JCR (as it is stored in the dispatcher) so that I could retrieve it in a separate API call from a different system.
Have you had this problem before, or do you have any idea how that could be achieved?
Many thanks
I would strongly suggest not storing the .html page inside the AEM instance, as that is not a core objective of AEM as a content management system. You should use your dispatcher instance (the cached files), or keep the file on an FTP server and share the file URL with the different systems as the data source.
Again, please note that AEM instances are very sensitive and should mainly focus on pages/components. In this case you would be storing a page that is generated out of components, and every time they are generated or modified you would need to update the stored copy again; that would be a burden. Hence, I suggest taking it from the dispatcher instance, which is populated as part of the publishing process.
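To make the suggestion concrete, the "different system" would simply fetch the published page over HTTP from the dispatcher rather than asking AEM for it. A small Node.js sketch, where the dispatcher host and page path are placeholders:

    // Fetch the dispatcher-cached, fully rendered HTML of a page.
    const https = require("https");

    https.get("https://dispatcher.example.com/content/mysite/en/page.html", (res) => {
      let html = "";
      res.on("data", (chunk) => (html += chunk));
      res.on("end", () => {
        // html now holds exactly what the dispatcher serves to visitors
        console.log(html.length, "bytes of rendered page");
      });
    });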
Let me know if you have any other thoughts.
I am using github to share a set of SPARQL queries:
http://www.boisvert.me.uk/opendata/sparql_aq+.html?file=specific%20sensor.txt
Currently this simple setup allows end-users to access queries stored on the GitHub repository, but ultimately I want to allow them to also modify the queries, as with a pastebin, and to make use of the repository to better manage the shared system. Ideally I would want end-users, who may not be very tech-savvy, to be able to make minor changes to queries against an open, linked-data endpoint: so I want to keep the technology barrier low.
My problem is this: how best to structure the github project and exploit the API to make the most of the available information? I can think of different points:
Currently the project (https://github.com/boisvert/unshaql) holds both client code and example queries. Would it make a difference to create an independent project (separate from the web client code) for the SPARQL queries?
I would use directories within the project to classify/tag queries, and file names to title them. Are there better alternatives? It strikes me that a hierarchical structure is not a good fit for tags.
When end-users save, a simpler (and cruder) option is to allow them to push their file into just one branch, which holds the examples. A better-engineered one would be to let them use their GitHub credentials to fork the set of SPARQL queries and edit their own, but with unaware users, how do I avoid creating a mess?
I think that a regular GitHub repository is a rather bad fit for this kind of content. If your users have a GitHub account, you should probably use gists instead: https://help.github.com/articles/about-gists/ I've never used this myself, but it seems perfectly adapted to what you are planning: your site could become a DB of tags over user-provided gists. That would, however, lock you into GitHub-specific solutions.
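For example, saving a user's edited query as a gist is a single call to the documented POST /gists endpoint; the token, file name and query content here are placeholders:

    // Sketch: store an edited SPARQL query as a public gist.
    fetch("https://api.github.com/gists", {
      method: "POST",
      headers: {
        "Authorization": "token YOUR-OAUTH-TOKEN",
        "Accept": "application/vnd.github+json"
      },
      body: JSON.stringify({
        description: "SPARQL query: specific sensor",
        public: true,
        files: {
          "specific-sensor.rq": { content: "SELECT * WHERE { ?s ?p ?o } LIMIT 10" }
        }
      })
    })
      .then(function (r) { return r.json(); })
      .then(function (gist) { console.log("Saved at", gist.html_url); });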
Even if you go for a regular repository, you should not allow users to commit to the repository hosting your code: that would be a serious security hazard, as you won't be able to control which parts of the repository they are allowed to commit to.
If you set up two repositories, it's rather easy to have the code of the webpage in one repository, and the queries automatically committed to another repository (under an anonymous identity, so that your users don't have to create a GitHub account), as sketched below.
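A sketch of that second-repository commit, using GitHub's contents API (PUT /repos/{owner}/{repo}/contents/{path}); the repository name, path and token are placeholders:

    // Sketch: commit a user-submitted query to a queries-only repository.
    function saveQuery(path, text) {
      return fetch("https://api.github.com/repos/boisvert/unshaql-queries/contents/" + path, {
        method: "PUT",
        headers: { "Authorization": "token YOUR-OAUTH-TOKEN" },
        body: JSON.stringify({
          message: "Update query via web client",
          content: btoa(text)  // the contents API expects Base64
          // updating an existing file additionally requires its current "sha"
        })
      });
    }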
Also, note that the oauth token should never be stored in a public repository (or the GitHub robots will invalidate it in a matter of hours).
See Hiding GitHub token in .gitconfig for a solution to this sub-problem.
I'm currently using the Jira SOAP interface from C# (I suppose the language used here isn't terribly important).
Basically, I'm creating an API and a WinForms app that wrap some of the functionality of the SOAP service, so that our devs can programmatically add bugs when something goes wrong in our application.
As part of this, I need to know the custom field IDs that are in use in Jira. Rather than hardcoding them (as they are still prone to the occasional change), I used the GetCustomFields() method in the JIRA RPC API and then filtered the results, so that all a developer needs to know is the name of the field; the ID is filled in for them automagically.
This all works fine, but with one quite important proviso: you must log in to the SOAP/RPC service as a user with administrative privileges.
The Jira documentation indicates that the SOAP/RPC service follows the usual workflows and security schemes, but I can't find anything anywhere that would remove this restriction on enumerating custom fields (and quite why anyone would HAVE to be an administrator to gain this access, especially as the custom field IDs tend to appear in Jira's HTML source, is beyond me).
Does anyone know if I've missed a setting somewhere? Or is there some sort of work-around for this, short of hardcoding the custom field IDs?
Or is this a case of having to delve into Jira's RPC plugin and modify its source in order to get the functionality I require?
Cheers
Edit, for the sake of Google/posterity:
Wow, all this time on, and it looks like Atlassian still haven't changed this behavior.
I worked around this by building a custom dictionary that is populated by logging in as an administrative user, grabbing the custom fields, and logging out again. Not ideal, but it should work until Atlassian changes things.
You're not missing anything - there's no way to get custom fields via the standard SOAP API.
In JIRA Client, we learn about custom fields in two ways:
We download issues via the RSS view of the Issue Navigator, or via the XML representation of a specific issue. If a custom field is set for an issue, the XML will contain its id, class and value(s); this approach is sketched after the list.
From time to time we inspect the content of the Issue Navigator search page, looking for the searchers for the custom fields. Screen-scraping the HTML gives us not only the ids of the custom fields but also the possible values for enum fields.
This is hackery, of course, and it may go wrong, so a good API would have been a lot better.
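For what it's worth, a quick sketch of the first approach in Node-style JavaScript (the same logic ports directly to C#). The base URL and issue key are placeholders; the /si/jira.issueviews:issue-xml/... path is JIRA's standard XML view:

    // Sketch: scrape custom field ids out of an issue's XML view.
    const https = require("https");

    const url = "https://jira.example.com/si/jira.issueviews:issue-xml/ABC-1/ABC-1.xml";

    https.get(url, (res) => {
      let xml = "";
      res.on("data", (chunk) => (xml += chunk));
      res.on("end", () => {
        // Each custom field appears as:
        //   <customfield id="customfield_10010" key="...">
        //     <customfieldname>Severity</customfieldname> ...
        const re = /<customfield id="(customfield_\d+)"[^>]*>\s*<customfieldname>([^<]+)/g;
        let match;
        while ((match = re.exec(xml)) !== null) {
          console.log(match[2] + " => " + match[1]); // e.g. "Severity => customfield_10010"
        }
      });
    });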
In your case, I can suggest two solutions:
Create your own SOAP (or REST) remote API plugin that gives you just the info you're missing from the standard API. Since you're seemingly in control of your JIRA instance, you can install anything there.
Screen-scrape the "New Bug" page for the project and issue type you need to submit. You'll get all the info - fields, options, default values, and which fields are required.