Why am I getting a difference between MATLAB's "lla2eci" and "sgp4.propagate"?

I am not experienced in this area, but over the past few days I've put together some code in Python that tracks (hopefully) the ISS. I've done the math and have that side of things working, but only when I inject the satellite position using MATLAB's lla2eci. To get a correct answer, I take the latitude and longitude of the satellite's subpoint from live data and convert that to ECI using MATLAB. This method gives me correct look angles (azimuth and elevation) for the ISS, and I've confirmed them with the pyephem method using iss.compute(home), where "home" is my LLA.
I'm comparing MATLAB's lla2eci to what satellite.propagate(...) is giving me, and at time = 2019 12 16 8 53 19 I get the following results:
MATLAB lla2eci: x, y, z = (3873.9, -902.18, -4969.9)
sgp4 propagate: x, y, z = (-4082.5, 3458.3, -4195.1)
I have to be missing something here! Any help would be greatly appreciated, and I'm glad to answer any questions to clarify.

Looking at the question, it seems like you are not taking altitude into account.
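As a quick check using only the two vectors posted above, the magnitudes already hint at the missing altitude:
import math

# The two position vectors from the question, in km.
matlab_lla2eci = (3873.9, -902.18, -4969.9)
sgp4_propagate = (-4082.5, 3458.3, -4195.1)

print(math.sqrt(sum(c * c for c in matlab_lla2eci)))  # ~6366 km: about one Earth radius, i.e. zero altitude
print(math.sqrt(sum(c * c for c in sgp4_propagate)))  # ~6799 km: Earth radius plus roughly the ISS's ~430 km altitude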
Since your aim is to track the ISS using Python, may I suggest a slightly different approach?
TLE values for space objects are available at https://www.space-track.org/, so sign up there.
Then find the position of the satellite in Python by using the sgp4 (https://pypi.org/project/sgp4/) and spacetrack (https://pypi.org/project/spacetrack/) libraries.
An example code would look like this:
from sgp4.earth_gravity import wgs84
from sgp4.io import twoline2rv
from spacetrack import SpaceTrackClient
from datetime import datetime
#generate TLE from database
st = SpaceTrackClient('YOUR_USERNAME', 'YOUR_PASSWORD')
tle = st.tle_latest(norad_cat_id=[<ISS_NORAD_CAT_ID>], ordinal=1, format='tle')
line1 = tle[:69]
line2 = tle[70:-7]
#create satellite object
satellite = twoline2rv(line1, line2, wgs84)
date_time = datetime.utcnow()
#find position
sat_position, sat_velocity = satellite.propagate(date_time.year, date_time.month,
                                                 date_time.day, date_time.hour,
                                                 date_time.minute, date_time.second)
Use your own username, password and norad ID.
welcome to stackoverflow :)

Related

How can I search street or postal address on simplekml without using coordinates

I have a list of street addresses, and my database grows weekly. For example: 10 Hage Geingob Walvis Bay. I need to plot their positions on Google My Maps (or alternatively Google Earth) using a KML file. I'm doing it via Python. I'll eventually have control over the labels, colors, titles and descriptions, offering me some value.
However, the code seems to only respond to coordinate (lat/long) input, using the "coords" argument. When using the "address" argument it simply targets the 0,0 lat/long position in the ocean.
I would appreciate any help on how to use a street address as the input directly into simplekml, as if I typed the street address directly into Google Maps or Google Earth.
As a workaround I have used geocoding functions such as geopy's Nominatim to establish GPS coordinates and then feed those to simplekml, but accuracy is terrible. Some addresses are accurate, some are miles off.
I'm not interested in setting up keys and APIs in GoogleV3.
Code used as a workaround:
from geopy.geocoders import Nominatim
import simplekml

address = "10 Hage Geingob Walvis Bay"  # example address from above

geolocator = Nominatim(user_agent="redacted")
location = geolocator.geocode(address)
print(location.address)
print((location.latitude, location.longitude))

kml = simplekml.Kml()  # Create an instance of Kml
single_point = kml.newpoint(name=address, coords=[(location.longitude, location.latitude)])
kml.save("10 Hage Geingob.kml")
In my view it should be possible to do the following, according to the simplekml documentation:
street_address = '10 Hage Geingob Walvis Bay'

import simplekml
kml = simplekml.Kml()
kml.newpoint(name=street_address, address=street_address)
kml.save("10 Hage Geingob.kml")
...but it doesn't work. Any help is much appreciated, thank you. I am indeed a complete noob.
Thank you

Plot a graph with ipycytoscape (and networkx)

Following the instructions of ipycytoscape I am not able to plot a graph using ipycytoscape.
according to: https://github.com/QuantStack/ipycytoscape/blob/master/examples/Test%20NetworkX%20methods.ipynb
this should work:
import networkx as nx
import ipycytoscape
G2 = nx.Graph()
G2.add_nodes_from([*'ABCDEF'])
G2.add_edges_from([('A','B'),('B','C'),('C','D'),('E','F')])
print(G2.nodes)
print(G2.edges)
cytoscapeobj = ipycytoscape.CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_networkx(nx_graph)
G2 is a networkx graph example and it looks ok since print(G2) gives the networkx object back and G2.nodes and G2.edges can be printed.
The error:
ValueError: invalid literal for int() with base 10: 'A'
Why should a node be an integer?
More generally, what should one do if the starting data point is a pandas dataframe with a million rows of edges, those edges being strings like ProcessA-ProcessB, ProcessC-ProcessD, etc.?
Also, looking at the examples, it is to be noted that the list of nodes is composed of a data dictionary for every node, that data including an "id" per node and also an "Attribute". The surprise here is that the networkx Graph should have all those properties.
thanks
This problem was fixed. See attachment.
Please let me know if it's still happening. Feel free to open an issue: https://github.com/QuantStack/ipycytoscape/
I'm just playing around with ipycytoscape myself, so I could be way off-base, but, shouldn't the line be:
cytoscapeobj.graph.add_graph_from_networkx(G2) # your graph name goes here
Trying to generate a cytoscape object built on a graph that doesn't exist might trigger a ValueError because it can't find any nodes.
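For reference, a minimal corrected version of the snippet from the question (only the last line changes, passing G2 as suggested above) would be:
import networkx as nx
import ipycytoscape

G2 = nx.Graph()
G2.add_nodes_from([*'ABCDEF'])
G2.add_edges_from([('A', 'B'), ('B', 'C'), ('C', 'D'), ('E', 'F')])

cytoscapeobj = ipycytoscape.CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_networkx(G2)  # pass the graph object that was actually built
cytoscapeobj  # display the widget in a Jupyter cell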

Transforming dates in tensorflow or tensorflow extended

I am working with Tensorflow Extended, preprocessing data and among this data are date values (e.g. values of the form 16-04-2019). I need to apply some preprocessing to this, like the difference between two dates and extracting the day, month and year from it.
For example, I could need to have the difference in days between 01-04-2019 and 16-04-2019, but this difference could also span days, months or years.
Now, just using Python scripts this is easy to do, but I am wondering if it is also possible to do this with Tensorflow? It's important for my use case to do this within Tensorflow, because the transform needs to be done in the graph format so that I can serve the model with the transformations inside the pipeline.
I am using Tensorflow 1.13.1, Tensorflow Extended and Python 2.7 for this.
Posting from a similar issue on the tft GitHub.
Here's a way to do it:
import tensorflow_addons as tfa
import tensorflow as tf
from typing import TYPE_CHECKING
@tf.function(experimental_follow_type_hints=True)
def fn_seconds_since_1970(date_time: tf.string, date_format: str = "%Y-%m-%d %H:%M:%S %Z"):
    seconds_since_1970 = tfa.text.parse_time(date_time, date_format, output_unit='SECOND')
    seconds_since_1970 = tf.cast(seconds_since_1970, dtype=tf.int64)
    return seconds_since_1970
string_date_tensor = tf.constant("2022-04-01 11:12:13 UTC")
seconds_since_1970 = fn_seconds_since_1970(string_date_tensor)
seconds_in_hour, hours_in_day = tf.constant(3600, dtype=tf.int64), tf.constant(24, dtype=tf.int64)
hours_since_1970 = seconds_since_1970 / seconds_in_hour
hours_since_1970 = tf.cast(hours_since_1970, tf.int64)
hour_of_day = hours_since_1970 % hours_in_day
days_since_1970 = seconds_since_1970 / (seconds_in_hour * hours_in_day)
days_since_1970 = tf.cast(days_since_1970, tf.int64)
day_of_week = (days_since_1970 + 4) % 7 #Jan 1st 1970 was a Thursday, a 4, Sunday is a 0
print(f"On {string_date_tensor.numpy().decode('utf-8')}, {seconds_since_1970} seconds had elapsed since 1970.")
My two cents on the broader underlying issue: here the question is computing time differences, for which we want to do these computations on tensors. Then the question becomes "What are the units of these tensors?" This is a question of granularity. The next question is "What are the data types involved?" Likely start with a string and end with a numeric. Then the next question becomes: is there a "native" tensorflow function that can do this? Enter tensorflow addons!
Just like we are trying to optimize training by doing everything as tensor operations within the graph, similarly we need to optimize "getting to the graph". I have seen the way datetime would work with python functions here, and I would do everything I could to avoid going into python-function land, as the code becomes complex and the performance suffers as well. It's a lose-lose in my opinion.
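For example, the difference in days between the two dates from the question can be computed entirely with tensor ops. This is a minimal sketch reusing fn_seconds_since_1970 from above; note the date strings here follow that function's date_format, not the DD-MM-YYYY form used in the question:
# Difference in days between 01-04-2019 and 16-04-2019, computed on tensors.
d1 = fn_seconds_since_1970(tf.constant("2019-04-01 00:00:00 UTC"))
d2 = fn_seconds_since_1970(tf.constant("2019-04-16 00:00:00 UTC"))
days_between = (d2 - d1) // tf.constant(86400, dtype=tf.int64)
print(int(days_between))  # 15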
PS - This op is not yet implemented on windows as per this, maybe because it only returns unix timestamps :)
I had a similar problem. The issue is caused by an if-check within TFX that doesn't take date types into account. As far as I've been able to figure out, there are two options:
Preprocess the date column and cast it to an int field (e.g. by calling toordinal() on each element) before reading it into TFX; see the sketch at the end of this answer.
Edit the TFX function that checks types to account for date-like types and cast them to ordinal on the fly.
You can navigate to venv/lib/python3.7/site-packages/tfx/components/example_gen/utils.py and look for the function dict_to_example. You can add a datetime check there like so:
def dict_to_example(instance: Dict[Text, Any]) -> tf.train.Example:
    """Converts dict to tf example."""
    feature = {}
    for key, value in instance.items():
        # TODO(jyzhao): support more types.
        if isinstance(value, datetime.datetime):  # <---- Check here
            value = value.toordinal()
        if value is None:
            feature[key] = tf.train.Feature()
        ...
value will become an int, and the int will be handled and cast to a Tensorflow type later on in the function.
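For the first option, here is a minimal sketch of that preprocessing step, assuming the data passes through pandas before ExampleGen reads it; the file and column names are hypothetical:
import datetime
import pandas as pd

# Hypothetical input: a CSV with a 'date' column such as 16-04-2019.
df = pd.read_csv("data.csv", parse_dates=["date"], dayfirst=True)

# Replace the datetime column with its proleptic ordinal (an int),
# so the TFX type check no longer sees a date-like value.
df["date"] = df["date"].map(lambda d: d.toordinal())
df.to_csv("data_preprocessed.csv", index=False)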

Get more of the metadata from the Neurotransmitter study using Allen SDK

I am downloading all the images from the Neurotransmitter study of the Allen Brain Atlas using this script:
from allensdk.api.queries.image_download_api import ImageDownloadApi
from allensdk.config.manifest import Manifest
import pandas as pd
import os
#an instance of Image API for downloading
image_api = ImageDownloadApi()

#getting the neurotransmitter study
#product id from http://api.brain-map.org/api/v2/data/query.json?criteria=model::Product
nt_datasets = image_api.get_section_data_sets_by_product([27])
df_nt = pd.DataFrame(nt_datasets)  # dataframe of section data sets iterated below

downsample = 0  # downsampling factor passed to download_section_image

for index, row in df_nt.iterrows():
    #get section dataset id
    section_dataset_id = row['id']
    #each section dataset id has multiple image sections
    section_images = pd.DataFrame(
        image_api.section_image_query(
            section_data_set_id=section_dataset_id)
    )
    for section_image_id in section_images['id'].tolist():
        file_path = os.path.join('/path/to/save/dir/',
                                 str(section_image_id) + '.jpg')
        Manifest.safe_make_parent_dirs(file_path)
        image_api.download_section_image(section_image_id,
                                         file_path=file_path,
                                         downsample=downsample)
This script presumably downloads all the available ISH experiments. However, I am wondering what would be the best way to get more of the metadata, as follows:
1) The type of ISH experiment, known as "gene" (for example whether an image is MBP-stained, Nissl-stained, etc.).
2) The structure and its correspondence to the atlas image (annotations, for example to see which part of the brain a section belongs to). I think this could be acquired with tree_search, but I am not sure how.
3) The scale of the image, for example how big one pixel is in the downloaded image (e.g., 0.001 x 0.001 mm). I would require this for image analysis with respect to MRI, for example.
All the above information is available somewhere on the website; my question is whether you could help me get it programmatically via the SDK.
EDIT:
It would also be great to download the "Nissl" stains programmatically, as they do not show up using the above loop.
To access this information, you'll need to formulate a somewhat complex API query.
from allensdk.api.queries.rma_api import RmaApi
api = RmaApi()
data_set_id = 146586401
data = api.model_query('SectionDataSet',
criteria='[id$eq%d]' % data_set_id,
include='section_images,treatments,probes(gene),specimen(structure)')
print("gene symbol: %s" % data[0]['probes'][0]['gene']['acronym'])
print("treatment name: %s" % data[0]['treatments'][0]['name'])
print("specimen location: %s" % data[0]['specimen']['structure']['name'])
print("section xy resolution: %f um" % data[0]['section_images'][0]['resolution'])
Output:
gene symbol: MBP
treatment name: ISH
specimen location: Cingulate Cortex
section xy resolution: 1.008000 um
Without doing a deep dive on the API data model, SectionDataSets have constituent SectionImages, Treatments, Probes, and source Specimens. Probes target Genes, and Specimens can be associated with a Structure. The query is downloading all of that information for a single SectionDataSet into a nested dictionary.
I don't remember how to find the specimen block extent. I'll update the answer if I find it.
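Assuming the df_nt dataframe from the question and the api object from this answer, a hedged sketch that records the treatment and gene for every dataset (so that ISH and other stains can be told apart) might look like:
for index, row in df_nt.iterrows():
    section_dataset_id = row['id']
    meta = api.model_query('SectionDataSet',
                           criteria='[id$eq%d]' % section_dataset_id,
                           include='treatments,probes(gene)')
    treatments = meta[0].get('treatments', [])
    probes = meta[0].get('probes', [])
    # Some datasets may not have a probe/gene attached, so guard the lookups.
    treatment_name = treatments[0]['name'] if treatments else None  # e.g. 'ISH', as in the output above
    gene_acronym = probes[0]['gene']['acronym'] if probes and probes[0].get('gene') else None
    print(section_dataset_id, treatment_name, gene_acronym)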

postgis shape file import problems

Hi I'm trying to import a shape file from
http://www.nyc.gov/html/dcp/html/bytes/bytesarchive.shtml
into a PostGIS database. The above file creates MULTIPOLYGONs when I import it using shp2pgsql.
Then I'm trying to simply determine if lat/long points are contained in my multipolygons.
However, my selects are not working, and when I print out the points of my the_geom column it seems to be very broken.
select st_astext(geom) from (select (st_dumppoints(the_geom)).* from nybb where borocode =1) foo;
gives the result...
st_astext
------------------------------------------
POINT(1007193.83859999 257820.786899999)
POINT(1007209.40620001 257829.435100004)
POINT(1007244.8654 257833.326199993)
POINT(1007283.3496 257839.812399998)
POINT(1007299.3502 257851.488900006)
POINT(1007320.1081 257869.218500003)
POINT(1007356.64669999 257891.055800006)
POINT(1007385.6197 257901.432999998)
POINT(1007421.94509999 257894.084000006)
POINT(1007516.85959999 257890.406100005)
POINT(1007582.59110001 257884.7861)
POINT(1007639.02150001 257877.217199996)
POINT(1007701.29170001 257872.893099993)
...
For points in NYC, this is very off. What am I doing wrong?
The points are not off. The spatial data that is referred to is NOT in lat/long; this is why the numbers are different from what you expect. If you need it to be in long/lat it must be reprojected. See more here: http://postgis.refractions.net/news/20020108/
The projection of the data seems to be the NAD_1983_StatePlane_New_York_Long_Island_FIPS_3104_Feet coordinate system (according to the metadata; see below).
<spref>
<horizsys>
<planar>
<planci>
<plance Sync="TRUE">coordinate pair</plance>
<coordrep>
<absres Sync="TRUE">0.000000</absres>
<ordres Sync="TRUE">0.000000</ordres>
</coordrep>
<plandu Sync="TRUE">survey feet</plandu>
</planci>
<mapproj><mapprojn Sync="TRUE">Lambert Conformal Conic</mapprojn><lambertc><stdparll Sync="TRUE">40.666667</stdparll><stdparll Sync="TRUE">41.033333</stdparll><longcm Sync="TRUE">-74.000000</longcm><latprjo Sync="TRUE">40.166667</latprjo><feast Sync="TRUE">984250.000000</feast><fnorth Sync="TRUE">0.000000</fnorth></lambertc></mapproj></planar>
<geodetic>
<horizdn Sync="TRUE">North American Datum of 1983</horizdn>
<ellips Sync="TRUE">Geodetic Reference System 80</ellips>
<semiaxis Sync="TRUE">6378137.000000</semiaxis>
<denflat Sync="TRUE">298.257222</denflat>
</geodetic>
<cordsysn>
<geogcsn Sync="TRUE">GCS_North_American_1983</geogcsn>
<projcsn Sync="TRUE">NAD_1983_StatePlane_New_York_Long_Island_FIPS_3104_Feet</projcsn>
</cordsysn>
</horizsys>
</spref>
If you work much with spatial data I suggest that you read more about map projection.
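To make the reprojection concrete, here is a hedged sketch of a containment query, assuming the layer was loaded with shp2pgsql -s 2263 (EPSG:2263, NAD83 / New York Long Island in US survey feet, which matches the metadata above) and using hypothetical connection details:
import psycopg2

conn = psycopg2.connect("dbname=gis user=postgres")  # hypothetical connection string
cur = conn.cursor()

# Transform a WGS84 (EPSG:4326) long/lat point into the layer's assumed
# state-plane SRID (2263) before testing containment against the boroughs.
lon, lat = -73.99, 40.73  # a point in Manhattan
cur.execute(
    """
    SELECT borocode
    FROM nybb
    WHERE ST_Contains(the_geom,
                      ST_Transform(ST_SetSRID(ST_MakePoint(%s, %s), 4326), 2263));
    """,
    (lon, lat),
)
print(cur.fetchall())  # should return borocode 1 (Manhattan) if the assumptions hold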
I don't think this is an issue with PostGIS. I checked the input ESRI shapefile nybb.shp with the AvisMap Free Viewer, and the points look just as strange there.
However, there is something interesting in the nybb.shp.xml metadata file:
<spdom>
<bounding>
<westbc Sync="TRUE">-74.257465</westbc>
<eastbc Sync="TRUE">-73.699450</eastbc>
<northbc Sync="TRUE">40.915808</northbc>
<southbc Sync="TRUE">40.495805</southbc>
</bounding>
<lboundng>
<leftbc Sync="TRUE">913090.770096</leftbc>
<rightbc Sync="TRUE">1067317.219904</rightbc>
<bottombc Sync="TRUE">120053.526313</bottombc>
<topbc Sync="TRUE">272932.050103</topbc>
</lboundng>
</spdom>
I am not familiar with that toolkit (ESRI ArcCatalog), but most probably you need to reproject your points after import using that metadata.