I received an XML list of SRSes from this request:
http://gis1:8080/geoserver/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetCapabilities
I then parse it and get a list of EPSG codes like "EPSG:1234".
How can I now get the name of a specific SRS, e.g. "Pulkovo 1942 / Gauss-Kruger zone 13" for "EPSG:28413"?
...or maybe I can do it via the OpenLayers API?
OGC services do not provide such a facility; they are built under the assumption that you already have an EPSG database available in the client.
You can try using a free online service to get the name (with the perils of a service that is not guaranteed to be available 24x7, of course), like:
https://www.epsg-registry.org/
http://epsg.io/
http://spatialreference.org/
A better solution production-wise, if you are using GeoServer, is probably to create a WPS process in GeoServer that does the same job (or create a REST service of your own based on other open-source libraries).
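For completeness, here is a minimal client-side sketch (Python; it assumes the pyproj library, which ships the local EPSG database mentioned above, and reuses the GeoServer URL from the question):

    import xml.etree.ElementTree as ET

    import requests
    from pyproj import CRS

    url = ("http://gis1:8080/geoserver/wms"
           "?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetCapabilities")
    capabilities = ET.fromstring(requests.get(url).content)

    # WMS 1.1.1 capabilities list the supported CRSes as <SRS> elements.
    codes = set()
    for srs in capabilities.iter("SRS"):
        for token in (srs.text or "").split():
            if token.startswith("EPSG:"):
                codes.add(token)

    for code in sorted(codes):
        # e.g. EPSG:28413 -> Pulkovo 1942 / Gauss-Kruger zone 13
        print(code, "->", CRS.from_epsg(int(code.split(":")[1])).name)

The same lookup could of course sit behind a small REST endpoint or a GeoServer WPS process, as suggested above.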
I have added GTM and GA4 to some website apps to produce tables of detailed stats on click-throughs of ads per advertiser for a date range. I now have suitable reports working successfully using Data Studio, but my attempts to do the same using the PHP implementation of the Analytics Data API V1 Beta (in order to do batch runs covering many date ranges) repeatedly hit a brick wall: the methods needed to analyse the response from instantiating BetaAnalyticsDataClient and then invoking runPivotReport, batchRunReports, batchRunPivotReports (and so on) appear not to be specified.
The only example that I could work from is the ‘quickstart’ one that does a basic dimension and metric retrieval, and even this employs:
getRows()
getDimensionValues()
getValue()
getMetricValues()
none of which appear in the API documentation, at least not anywhere I can find.
The JSON response format for each report is of course documented: for example, the output from running runPivotReport is documented as an instance of RunPivotReportResponse.
But nowhere can I find a specification of the methods to be used to traverse the JSON tree (vide getDimensionValues() above) and extract some output data.
Guesswork has taken me part way, but purely for example, when retrieving pivot data, should a
getPivotDimensionHeaders()[0]
be followed by a
getDimensionValues()
or a
getPivotDimensionValues()?
I am obviously approaching this all wrong, but what should I do, please?
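For reference, the documented RunPivotReportResponse JSON nests pivotHeaders -> pivotDimensionHeaders -> dimensionValues, and each row carries dimensionValues and metricValues. Here is a rough sketch of how I can walk that shape against the raw REST response (Python here, purely to illustrate the structure; I am assuming the generated PHP getters simply mirror these field names, which is exactly the part I cannot find documented):

    import json

    # A tiny made-up payload in the documented RunPivotReportResponse shape.
    response_json = """
    {
      "pivotHeaders": [
        {"pivotDimensionHeaders": [
           {"dimensionValues": [{"value": "advertiser-a"}]},
           {"dimensionValues": [{"value": "advertiser-b"}]}
         ],
         "rowCount": 2}
      ],
      "rows": [
        {"dimensionValues": [{"value": "advertiser-a"}],
         "metricValues": [{"value": "42"}]},
        {"dimensionValues": [{"value": "advertiser-b"}],
         "metricValues": [{"value": "7"}]}
      ]
    }
    """
    response = json.loads(response_json)

    # Pivot headers: pivotHeaders -> pivotDimensionHeaders -> dimensionValues.
    for pivot in response.get("pivotHeaders", []):
        for header in pivot.get("pivotDimensionHeaders", []):
            print([dv["value"] for dv in header["dimensionValues"]])

    # The data itself: each row carries dimensionValues and metricValues.
    for row in response.get("rows", []):
        print([dv["value"] for dv in row["dimensionValues"]],
              [mv["value"] for mv in row["metricValues"]])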
A customer has implemented an OPC-UA server and has provided some documentation for us to access it. The only information we have is the endpoint to contact the server at and the tags that the data points are linked to.
I have to implement a client without having access to the server to test it with. Is this enough information to go by? I imagine we would at least need some namespace URI. From what I understand, in order to use a function such as translateBrowsePathsToNodeIds I would also need to know some namespace IDs.
For instance, in python-opcua it would be something like:
mynode = client.uaclient.translate_browsepaths_to_nodeids(ua.QualifiedName("StaticData", 3)) (which somehow is not working but that's another question)
It doesn't help that the client examples I find all seem to use hardcoded namespace IDs.
TranslateBrowsePathsToNodeIds is generally used when programming against type definitions, where you know what the path of BrowseNames will be because it is defined by the type definition of each node in the path.
If this doesn't sound like your situation then you should push back for the documentation to include the NodeIds of all the Nodes you need to access.
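For illustration, a minimal python-opcua sketch of the namespace-resolution part; the endpoint URL, namespace URI and BrowseName below are placeholders standing in for whatever the customer's documentation actually specifies:

    from opcua import Client

    client = Client("opc.tcp://customer-server:4840")  # placeholder endpoint
    client.connect()
    try:
        # Namespace indices are assigned per server, so resolve the documented
        # namespace URI to an index instead of hardcoding a number like 3.
        idx = client.get_namespace_index("http://customer.example/UA/")

        # Walk the path by BrowseName from the Objects folder; under the hood
        # this uses the TranslateBrowsePathsToNodeIds service.
        node = client.get_objects_node().get_child([f"{idx}:StaticData"])
        print("Resolved NodeId:", node.nodeid)
    finally:
        client.disconnect()

If the documentation gives you neither a namespace URI nor the BrowseNames along the path, you are back to asking for explicit NodeIds, as noted above.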
I am currently working on a PowerShell script that will automatically download the billing data from my company's CSP platform. I use the following PowerShell module: https://github.com/Microsoft/Partner-Center-PowerShell
It uses the following API (which the module calls, I guess) to get the overall price of all my subscriptions for the last month: https://learn.microsoft.com/en-us/partner-center/develop/get-a-subscriptions-resource-usage-information. With the PowerShell module, which is wonderful, I've managed to receive good data about my Azure resources and print it out to a CSV file for Power BI to create a report from.
My question concerns what I see when I use Power BI to create graphs of my cost and usage. I don't see the names of my resources (VM, storage, SQL); instead I see the names of the types (Read Operations, LRS Write Additional IO, ...). Of course that is also a useful indicator, but I would love to see the VM (the name of the VM, storage or SQL resource) with the highest cost and usage, not just which type. The ResourceName in the response of that API is not exactly right. The resource name (in resource-URL format), however, is available with this API: https://learn.microsoft.com/en-us/partner-center/develop/get-a-customer-s-utilization-record-for-azure.
But there I cannot retrieve the cost of my Azure resource. I tried to combine the two APIs (one to retrieve the cost and the other to retrieve the resource URL, with the resource ID as the join key), but strangely enough some customers have data in the usage API and not in the utilization API, so that didn't work out well. My question today is: is it possible to retrieve the resource name or URL with the response data I get from this API: https://learn.microsoft.com/en-us/partner-center/develop/get-a-subscriptions-resource-usage-information? Or is there another way of showing the name of the object instead of the service?
The resource usage feature will not return the resource name. So, you will need to combine data from Get-PartnerCustomerSubscriptionUtilization and Get-PartnerAzureRateCard to get the resource name and the partner cost associated with each billable meter. Since you are working with Power BI, it might be a good idea to check out Partner-Center-Query. It is another project that allows you to query data from the Partner Center API through Power Query.
What I meant was: How do we know what requests a particular URI of the container accepts and what parameters we can use?
For example:
the container URI: http://example.com/containers/container1
-> Now I want to know a way to access the metadata of the container. How do I do it?
The main reason I am asking this question is that I am working on a migration of Fedora Commons from 3 to 4, and I am confused by the many different schemas and notations. In some places they use http://something.com/smthng/fcr:metadata,
and in other places they use http://something.com/smthng/metadata. Sometimes the fedora namespace will work in the URI and in other places it won't. I am confused.
I want a way to discover all the accepted conventions for a given URI.
I really don't think you have to go beyond Wikipedia's Uniform Resource Identifier definition to understand the standards for URIs, URLs and URNs.
However, the question is more likely about the Linked Data Platform. If you go to Concept Mapping - Fedora 3 to 4, the links for Fedora 4 go to the W3C Linked Data Platform (LDP) recommendation. That basically states how the REST API works to query RDF data.
LDP containers are a way to partition RDF data so you can query the container and get a list of RDF resources. I don't think there is a way to query their metadata. The set of available containers is defined in the data itself, and containers are not required; the data may be entirely defined with resources, with containers serving only as a way to partition the RDF data. If you have SPARQL access to the data, one idea is to query the data looking for LDP containers. Then you can send REST requests to get that data.
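A sketch of that idea (Python; the SPARQL endpoint URL is a placeholder, and this assumes the repository exposes one):

    import requests

    SPARQL_ENDPOINT = "http://example.com/sparql"  # placeholder

    # Ask the data itself which resources are declared as LDP containers.
    query = """
    PREFIX ldp: <http://www.w3.org/ns/ldp#>
    SELECT DISTINCT ?container WHERE {
      ?container a ?type .
      FILTER (?type IN (ldp:Container, ldp:BasicContainer,
                        ldp:DirectContainer, ldp:IndirectContainer))
    }
    """
    resp = requests.get(
        SPARQL_ENDPOINT,
        params={"query": query},
        headers={"Accept": "application/sparql-results+json"},
    )
    containers = [b["container"]["value"]
                  for b in resp.json()["results"]["bindings"]]

    # Then fetch each container over the plain LDP/REST interface.
    for uri in containers:
        turtle = requests.get(uri, headers={"Accept": "text/turtle"}).text
        print(uri, "->", len(turtle), "bytes of Turtle")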
(BTW, an RDF text serialization is a text-based representation of RDF graphs. Using a text serialization allows users to exchange data in a standard format. RDF standards include RDF/XML, Turtle, N-Triples, and JSON-LD.)
I just installed memcached on my machine. Actually, I have to create regions in it.
For example, there should be three regions created, each storing a set of data. I am not sure how I can do that in memcached. Can anyone please help or give an example, as it is "urgent"?
Thanks in advance
Memcached itself is just a service that runs on your server. It can be connected to, and commands sent to it, over a text-based protocol. Once the service is running, you can use a tool such as telnet or netcat to interact with it.
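For example, a raw session over that text protocol looks roughly like this (assuming memcached is listening on its default port, 11211):

    $ telnet localhost 11211
    set greeting 0 900 5
    hello
    STORED
    get greeting
    VALUE greeting 0 5
    hello
    END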
As for accessing from Java, you'll probably want a library to do most of the work for you. There were a few listed on this question: Java Memcached Client
Now as for your regions: memcached is basically a key/value table. To set a region's data you'd do something like memcached.set("key", yourData), and to get it back you'd do something like yourData = memcached.get("key").
Note that these functions will vary depending on which library you are using.
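To make the "regions" part concrete: memcached has no built-in notion of regions, so one common approach is simply to bake the region name into the key as a prefix. A minimal sketch (Python with pymemcache here rather than a Java client, but the idea is identical; assumes memcached running on localhost:11211):

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    def region_set(region, key, value):
        # The "region" is just a namespace baked into the key.
        client.set(f"{region}:{key}", value)

    def region_get(region, key):
        return client.get(f"{region}:{key}")

    region_set("users", "42", b"alice")
    region_set("sessions", "42", b"token-xyz")
    print(region_get("users", "42"))      # b'alice'
    print(region_get("sessions", "42"))   # b'token-xyz'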