I am trying to use IBM Watson Visual Recognition.
In the Visual Recognition web tool I try to train my classifier, but the training fails.
I don't know what I am doing wrong.
I upload two ZIP files, each containing 30 JPG photos of houses.
After the upload it starts training, but after a few minutes it turns red and fails.
In addition, I tried IBM's example training data (the beagle set), and that failed too.
Please help me see what I am doing wrong.
Thanks,
Niv.
There are a number of additional requirements for your training data, for instance the size of individual pictures and of the whole file. Check your training data against these requirements:
https://www.ibm.com/watson/developercloud/doc/visual-recognition/customizing.html#size-limitations
It is also possible that your training request contains a syntax error.
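For reference, a training request against the Visual Recognition v3 HTTP API looks roughly like the sketch below; the class name, ZIP file names, endpoint host and api_key are placeholders, not values from your account. Comparing it with what you send may help spot a syntax problem:

curl -X POST \
  -F "houses_positive_examples=@houses.zip" \
  -F "negative_examples=@not_houses.zip" \
  -F "name=houses" \
  "https://{your-visual-recognition-endpoint}/v3/classifiers?api_key={your-api-key}&version=2016-05-20"

The page linked above also lists the minimum image counts and maximum file sizes per ZIP, which are a common cause of failed trainings.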
I'm working on bark-beetle outbreaks in Alpine forests. To do so, I work with UAV-acquired orthophotos with multispectral bands (RGB, NIR, RE). I want to perform a supervised classification of a raster (VRT), based on field-acquired ROIs.
I successfully did this using the SAGA GUI. I'm now trying to repeat the same process using QGIS, as I have all my working layers sorted in a project. I want to get the same supervised classification with the built-in SAGA extension, but here (and not in the SAGA GUI) the algorithm asks for a mandatory "Load Statistics from File" parameter.
How do I have to set this parameter?
Reading the SAGA documentation, I saw it should be the path to a stats file (about the raster to classify?), but no further information is provided about the content of this stats file. I don't know how to create it, nor whether there is a way to create it using QGIS or the SAGA GUI.
Nor did I find any help about this in the SAGA documentation or anywhere else online.
I am a beginner with Talend and I would like to compare rows from different sources in order to build a history (historization). I made a filter, but it does not work. I would like to keep the updated rows that contain a modification and reject those that have no change. Thank you in advance for your help; here is my job.
I'm currently using openmaptiles to generate planet tiles (zoom 0 to 14 or 15). This is a long process that I plan to run on dedicated servers.
I know this is a service offered by openmaptiles, but I can't afford to spend $1,000–$1,200 to generate or buy the tiles.
The README of the openmaptiles project says that quickstart.sh isn't optimized for planet rendering, which is why I'd like to know how I could optimize the configuration to make it as fast as possible.
To be clear, I will use mb-util to generate tiles from the .mbtiles files, which lets me run the planet generation on different servers for different zoom levels (e.g. zoom 1 to 9 on a first server, and 10 to 14 on another). This way I will collect several .mbtiles files, from which I will generate and merge .pbf tiles with mb-util.
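A rough sketch of that export/merge step (file and directory names are placeholders; this assumes mb-util is installed):

# export each .mbtiles file produced on a server into a directory of .pbf tiles
mb-util --image_format=pbf planet_z1_z9.mbtiles tiles/
mb-util --image_format=pbf planet_z10_z14.mbtiles tiles_high/
# the zoom ranges don't overlap, so the two trees can simply be copied together
rsync -a tiles_high/ tiles/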
I read a GitHub issue about this, but it didn't change anything for me.
Maybe I can also remove some layers that won't be used on my map? (How do I do that?)
At the moment, when I run the script, it doesn't seem to use the full CPU capacity.
Thanks for your help
I found a way to accelerate the process:
I made a PR against the openmaptiles/generate-vectortiles repo, which contains the Dockerfile of the main container for this project.
Behind the scenes, this container uses Mapbox's tilelive project, which allows a big job to be split into smaller ones.
I added two environment variables:
JOBS: the number of jobs the work should be split into
JOB_NUM: the number of the job to run
The fork is here: https://github.com/qlerebours/generate-vectortiles
This lets you parallelize the process if you have multiple servers available for the generation.
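For example, running the forked image on one of the servers could look roughly like this (the image name and the values are placeholders, not taken from the fork's docs):

docker run -e JOBS=10 -e JOB_NUM=3 qlerebours/generate-vectortiles

Each server gets the same JOBS value and a different JOB_NUM, and produces its own output to be merged later.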
You can restrict the layers that are generated by modifying https://github.com/openmaptiles/openmaptiles/blob/master/openmaptiles.yaml
Inside openmaptiles.yaml, reduce the layers entry so that it contains only the layers you require.
For example, I only needed building data, so I changed the file so that the layers section contained only the following:
layers:
  - layers/building/building.yaml
I worked this out by going through the history of the openmaptiles repository. It worked for me.
Hope this helps! If you find other ways to speed up the process, it would be good to share them!
Thanks
-Rufus
We are trying to get NetLogo to take in real-time data, but we haven't found any helpful threads online that explain how.
We used historical stock price data to train our agents in the first stage. After the training phase, we would like to use real-time data to test the strategies the agents generated. To do this, we need NetLogo to read real-time data online. Is there a way to have NetLogo read stock prices online, e.g. from Yahoo Finance, and run automatically?
Could you please give us some hints on how to implement this in NetLogo? If NetLogo is incapable of doing this, can anyone suggest other agent-based modeling tools that can?
Thanks.
You can use the NetLogo web extension to get real time information from any stock price API.
Looks like Yahoo has a pretty simple API.
To use their API to, for example, get Google's latest stock price, you'd do something like:
web:make-request "http://download.finance.yahoo.com/d/quotes.csv" "GET" [["s" "GOOG"] ["f" "l1"] ["e" ".csv"]]
Currently this gives me:
observer> show web:make-request "http://download.finance.yahoo.com/d/quotes.csv" "GET" [["s" "GOOG"] ["f" "l1"] ["e" ".csv"]]
observer: ["556.65" "HTTP/1.1 200 OK"]
That result is a list where the first element is the actual content of the response (in this case, the price as a string) and the second is the HTTP status, indicating whether or not the request was successful. 200 means it worked.
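As a minimal sketch, you could wrap this in a procedure that runs every tick and stores the quote in a global; the global name and the status check are my own assumptions, not part of the extension:

extensions [ web ]
globals [ current-price ]

to update-price
  ; fetch the latest GOOG quote, as in the example above
  let response web:make-request "http://download.finance.yahoo.com/d/quotes.csv" "GET" [["s" "GOOG"] ["f" "l1"] ["e" ".csv"]]
  ; only use the result if the request succeeded (the second element is the HTTP status)
  if last response = "HTTP/1.1 200 OK" [
    set current-price read-from-string first response  ; convert "556.65" to a number
  ]
end

You could then call update-price from your go procedure before the agents act on the price.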
I have created some links between agents (turtles) in NetLogo. These links change at each time step. My aim is to export these data (i.e., the turtles and the links between them) as a graph with vertices (turtles) and edges (links), which can be given as input to Gephi. Is it possible to see the changes that occur in NetLogo reflected in the graph when it is linked with Gephi? Can someone help me out? Thanks.
To export your network data in a format usable by Gephi, I would suggest using the nw:save-graphml primitive from NetLogo's NW extension. This will produce a file in the GraphML format, which Gephi can read.
I guess you could re-save your network at each time step and overwrite your file, but I'm not sure if Gephi can display your changes dynamically. And depending on the size of your network, it might be slow.
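A minimal sketch of that approach, saving a tick-stamped GraphML file so each step gets its own snapshot instead of overwriting one file:

extensions [ nw ]

to export-network
  ; tell the NW extension which agents and links make up the network
  nw:set-context turtles links
  ; write one GraphML file per tick, e.g. network-0.graphml, network-1.graphml, ...
  nw:save-graphml (word "network-" ticks ".graphml")
end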
Are you trying to use Gephi to see how a network generated by NetLogo changes over time? That's what Nicolas Payette's answer suggests, so I'll make the same assumption.
Gephi can display "dynamic graphs", i.e. networks that change over time. My understanding is that there are two file formats that allow Gephi to import dynamic graphs: GEXF, and a special CSV (comma-separated) format that Gephi calls "Spreadsheet". Nicolas mentioned GraphML, which is a very nice network data format, but it doesn't handle dynamic graphs. And as far as I know, NetLogo doesn't generate GEXF or Gephi's "Spreadsheet" format.
However, the Gephi Spreadsheet format is very simple, and it would not be difficult to write a NetLogo procedure that would write a file in that format. This procedure would write new rows to the "Spreadsheet" CSV file on each NetLogo tick. Then Gephi could read in the file, and you'd be able to move back and forth in time, seeing how the graph changes. (You might need to use a bit of trial and error to figure out how to write Spreadsheet files based on the description on the Gephi site.)
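As a rough sketch of such a procedure (the column layout here is only a guess; check it against the Spreadsheet description on the Gephi site):

to append-edges-to-spreadsheet
  ; write a header once, then one row per link per tick, using the tick as the start time
  file-open "edges.csv"
  if ticks = 0 [ file-print "Source;Target;Start" ]
  ask links [
    file-print (word [who] of end1 ";" [who] of end2 ";" ticks)
  ]
  file-close
end

Calling this once per tick appends the current set of edges to the file, since NetLogo's file writing adds to the end of an existing file.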
Another option would be to display the evolving graph online using the GraphStream protocol. Plugins for NetLogo as well as for Gephi provide support for this.