I have created an Overpass API server locally using Docker (https://hub.docker.com/r/wiktorn/overpass-api). Suppose I would like to update an attribute of a node/way, such as smoothness, surface, or speed limit, with a new value for a single point, or make a bulk update of points, in my locally installed server (instead of the public server). How does one do that? Any feedback is appreciated.
I recorded my JMeter script on server X and made it dynamic, then ran that same script on server Y. It fetches all data via the post-processors and does not report any error, but the data is not added on the front end. How can I solve this, and what could be the reason? (The website is the same; only the server was changed for testing.)
Expected: the data should be added on the front end, e.g. a lead should be created on server Y (it is created successfully on server X).
Actual: the data is not added on server Y.
Most probably you need to correlate your script as it is not doing what it is supposed to be doing.
You can run your test with 1 virtual user and 1 iteration configured in the Thread Group and inspect the request and response details using the View Results Tree listener.
My expectation is that you are either not getting logged in (you have added an HTTP Cookie Manager to your Test Plan, haven't you?) or failing to provide valid dynamic parameters. Modern web applications widely use dynamic parameters, for example for client-side state tracking or for CSRF protection.
You can easily detect dynamic parameters by recording the same scenario one more time and comparing the generated scripts. All the values which differ need to be correlated, that is, extracted from the previous response using a suitable Post-Processor and stored into a JMeter Variable. Once done, you will need to replace the recorded hard-coded values with the aforementioned JMeter Variables.
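As a minimal illustration (the field name csrf_token and the variable name csrfToken are hypothetical): suppose the login page returns a hidden csrf_token field whose value changes on every request. You would add a Regular Expression Extractor as a child of the sampler that returns that page, configured roughly like:

Reference Name: csrfToken
Regular Expression: name="csrf_token" value="(.+?)"
Template: $1$
Match No.: 1

and then replace the recorded hard-coded value in the following request with ${csrfToken}.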
Check out the How to Handle Correlation in JMeter article for comprehensive information with examples.
Hi, I'm a student and I'm working with the broker for the first time. I understood how the creation of entities works and how they are updated through "update" queries. My question is: can you create an entity that contains variables (e.g. geolocation) defined with a "null" or "zero" value and then initialize them with the values that interest me, so as to have dynamic rather than static variables (i.e. ones that require updates from the user)?
Or do we need interaction with the CEP to do this?
From what I have read in the fiware-orion guide, when I create an entity (e.g. a car having the attributes velocity and coordinates: geopoint), the values of these two attributes must be set statically (e.g. speed 100 and position coordinates 40.257, 2.187). If I understand correctly, I can only update the values of these attributes by making an update query. So my question is:
Is it possible to update the value of the attributes that contain the position or the speed of the car dynamically, i.e. without having to type the values in by hand? Or does this require the use of Orion's CEP?
In case I have not explained myself clearly: more generally, I would like to know if it is possible to follow the progress of a moving car without me having to add the values from the keyboard.
Thanks.
The Orion Context Broker exposes a REST-based API that (among other things) allows you to create, update and query entities. From the point of view of Orion, it doesn't matter who invokes the API: it can be done manually (for instance, using Postman or curl) or by an automated system developed by you or a third party (for instance, software running on a sensor in the car that measures the speed and periodically sends an update over a wireless communication network).
From a client-server point of view (in case you are familiar with these concepts), Orion plays the role of API server and whoever updates the speed (either manually or automatically) plays the role of API client.
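As a minimal sketch of such an automated client (assuming the NGSIv2 API, Orion listening on its default port 1026, and a hypothetical entity with id Car1 that already has a Number attribute named speed), the updater simply sends a PATCH to the entity's attributes:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object SpeedUpdater extends App {
  // Hypothetical: the new speed reading that the sensor wants to publish
  val body = """{ "speed": { "value": 110, "type": "Number" } }"""

  val request = HttpRequest.newBuilder()
    .uri(URI.create("http://localhost:1026/v2/entities/Car1/attrs"))
    .header("Content-Type", "application/json")
    .method("PATCH", HttpRequest.BodyPublishers.ofString(body))
    .build()

  val response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())

  // Orion replies 204 No Content when the update succeeds
  println(response.statusCode())
}

Run periodically (e.g. by a scheduler or by the sensor's firmware), this keeps the entity's speed attribute up to date without anyone typing values by hand.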
We are deploying Tableau for a bank.
We created 6 test dashboards using dummy data on a staging database over a SQL connection; let's say it has the IP 10.10.10.10.
Now we need to use the same views we built with the dummy data on the live data, but over a different connection, again a SQL engine, with an IP of, let's say, 20.20.20.20. All the variable names and other properties are the same; the only difference is that the live data will not have the calculated fields, which we can deploy in the live environment.
The challenge is that the live data, belonging to a bank, is highly confidential and cannot be accessed from outside the operations site; we have to deploy from an ODC [a restricted environment]. Hence we simply cannot do a replace data source.
So we are planning to move the .twbx files and data extracts for each of these views to the ODC via a shared folder. The process would then be as follows:
As the live SQL database is different from the dummy one, we will get an error
We will select edit data connection
We will select the Tableau data extract for each sheet and dashboard
We will then select the replace data source option and select the live SQL database
We will extract the new data
The visualization should work fine
Earlier we had just moved the .twbx files, hence it failed. Is there a different approach to this?
I did something similar to this.
For that, you must:
have the same schema in the live database and the dummy database
not change the name of any source table or column
create your viz
send it as a .twb file, which is an editable XML format
Now the hard part: open your .twb in Notepad and replace all the connection details with the new ones (the relevant section looks roughly like the snippet below)
save and open it in Tableau
tell me if it didn't work
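For illustration only (the exact attributes vary by connector, and the values here are the hypothetical ones from this thread), the connection element inside the .twb XML looks roughly like:

<connection class='sqlserver' server='10.10.10.10' dbname='StagingDB' username='tableau_user' ... />

Changing server='10.10.10.10' to server='20.20.20.20' (and dbname/username if they differ) points the workbook at the live database.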
One method would be to modify the hosts file on your local computer, pointing the production server name at the staging instance of the database. For example, let's say your production database is prod.url.com and you have a reporting staging db server instance called reportstage.otherurl.com.
Open your hosts file and add an entry for prod.url.com, pointing it at the IP address of reportstage.otherurl.com (hosts entries map names to IP addresses; see the example entry below).
Develop the report in Desktop, with the db connection string to prod.url.com.
When you publish the twb file to Server, no connection string changes are needed.
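A minimal sketch of such a hosts entry, where 192.0.2.15 stands in for the (hypothetical) IP address of reportstage.otherurl.com:

192.0.2.15    prod.url.com    # actually resolves to the staging reporting server

With this in place, Desktop believes it is connecting to prod.url.com while the traffic goes to the staging instance, so the connection string inside the workbook never has to change.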
Another, easier way is to publish the .twb to Server with your staging connection string and then edit the connection string in the data source on Server.
Develop the twb file on your local computer against your staging database.
Publish the twb file to Server.
Go to the workbook on Server and instead of looking at the views, click on Data Sources.
Edit the data source(s) connection information. This allows you to edit the server name, port, username, or password.
I've used this second method quite a bit. We have an environment where we can't hit the production db outside of the data center. Our staging environment doesn't have that restriction. We develop against the stage db, deploy, and edit the server name in the data source.
I have a classic spray + Slick HTTP server which is my database access layer, and I'd like to have a health-check route to ensure my server is still able to reach my DB.
I could do it by issuing a generic SQL query, but I was wondering if there was a better way to check that the connection is alive and usable without actually adding load to the database (or at least with the minimum possible load).
So pretty much:
val db = Database.forConfig("app.mydb")
[...]
db.???? // Do the check here
Why do you want to avoid executing a query against the database?
I think the best health check is to actually use the database as your application would (actually connecting and running a query). With that in mind, you can perform a SELECT 1 against your DB, and verify that it responds accordingly.
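A minimal sketch of such a check with Slick (assuming the PostgreSQL profile; swap in whichever profile your application uses):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import slick.jdbc.PostgresProfile.api._

val db = Database.forConfig("app.mydb")

// Lightweight health check: issues "SELECT 1" and succeeds only if the DB answers.
def healthCheck(): Future[Boolean] =
  db.run(sql"SELECT 1".as[Int].head)
    .map(_ == 1)
    .recover { case _ => false }

The query is served through the connection pool and touches no tables, so the extra load on the database is negligible.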
I just installed memcached on my machine. Actually, I have to create regions in it.
For example, there should be 3 regions created, each storing a set of data. I am not sure how I can do that in memcached. Can anyone please help or give an example, as it is "urgent"?
Thanks in advance
Memcached itself is just a service that runs on your server. It can be connected to and commands sent to it over a text based protocol. Once the service is running, you can use a tool such as telnet or netcat to interact with it.
As for accessing from Java, you'll probably want a library to do most of the work for you. There were a few listed on this question: Java Memcached Client
Now as for your regions: memcached is basically a key/value table. To set a region you'd do something like memcached.set("key", yourData), and to get it back you'd do something like yourData = memcached.get("key").
Note that these functions will vary depending which library you are using.
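A minimal sketch using the spymemcached client (an assumption; any of the libraries from the linked question will look similar), written here in Scala although the same calls work from Java. memcached has no built-in notion of regions, so a common trick is to simulate them with key prefixes:

import java.net.InetSocketAddress
import net.spy.memcached.MemcachedClient

object MemcachedRegions extends App {
  // Connect to the local memcached service on its default port
  val client = new MemcachedClient(new InetSocketAddress("localhost", 11211))

  // "Regions" simulated by prefixing keys; values expire after 3600 seconds
  client.set("region1:user:42", 3600, "data for region 1")
  client.set("region2:user:42", 3600, "data for region 2")

  println(client.get("region1:user:42"))   // prints: data for region 1

  client.shutdown()
}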