I'm a beginner in Python and I'm working with the Ansys Customization Tool (ACT) to add my own extension.
Is there a direct way to fill a file with every node's coordinates after deformation, hopefully in three rows or columns: x, y, z?
So far I have only found the GetNodeValue object, but it only gives me the displacement, and I need the deformed coordinates for the entire model.
My first idea was to add the displacements to the initial coordinates, but I didn't manage to do it.
Many thanks for your help
Lara
APDL Snippet
Add an APDL Snippet in the solution part of the tree:
/prep7
UPGEOM,1,1,1,file,rst ! adds the displacements to the nodal coordinates.
cdwrite,geom,nodesAndelements,geom ! Writes node and element data to nodesAndelements.geom
I'm not sure if you can work with the output format from cdwrite, but this is the quickest solution I can think of.
If you want to automate this, you have to insert the command snippet via:
solution = ExtAPI.DataModel.Project.Model.Analyses[0].Solution
fullPath = "path//to//snippet"
snippet = solution.AddCommandSnippet()
snippet.ImportTextFile(fullPath)
ACT
If you want to stay in ACT, it could be done like this:
global nodeResults
import units
analysis = ExtAPI.DataModel.Project.Model.Analyses[0]
mesh = analysis.MeshData
# Get nodes
allNodes = mesh.Nodes
# get the result data
reader = analysis.GetResultsData()
# get the deformation result
myDeformation = reader.GetResult("U")
nodeResultsTemp = []
result_unit = myDeformation.GetComponentInfo("X").Unit
for node in allNodes:
    # get node deformation and convert values to meter
    deformationNode1 = myDeformation.GetNodeValues(node.Id)
    deformationNode1[0] = units.ConvertUnit(deformationNode1[0], result_unit, "m", "Length")
    deformationNode1[1] = units.ConvertUnit(deformationNode1[1], result_unit, "m", "Length")
    deformationNode1[2] = units.ConvertUnit(deformationNode1[2], result_unit, "m", "Length")
    # add node coordinates (in meter) to the displacement
    mesh_unit = mesh.Unit
    node1 = mesh.NodeById(node.Id)
    node1CoorX = units.ConvertUnit(node1.X, mesh_unit, "m", "Length")
    node1CoorY = units.ConvertUnit(node1.Y, mesh_unit, "m", "Length")
    node1CoorZ = units.ConvertUnit(node1.Z, mesh_unit, "m", "Length")
    deformationNode1[0] = deformationNode1[0] + node1CoorX
    deformationNode1[1] = deformationNode1[1] + node1CoorY
    deformationNode1[2] = deformationNode1[2] + node1CoorZ
    nodeResultsTemp.append([node1.X, node1.Y, node1.Z, deformationNode1[0], deformationNode1[1], deformationNode1[2]])
nodeResults = nodeResultsTemp
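To actually get the coordinates into a file, as the question asks, here is a minimal sketch (the output path is a placeholder; columns 3-5 of each nodeResults row hold the deformed x, y, z in meter):
outputPath = "C:/temp/deformed_nodes.txt"  # placeholder path, replace with your own
with open(outputPath, "w") as f:
    for row in nodeResults:
        # write one node per line: deformed x, y, z
        f.write("%e %e %e\n" % (row[3], row[4], row[5]))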
Related
I'm trying to download precipitation data for a list of latitude-longitude coordinates in R. I came across this question, which gets me most of the way there, but over half of the weather stations don't have precipitation data. I've pasted my code below up to this point.
I'm now trying to figure out how to only get data from the closest station with precipitation data, or run a second function on the sites with missing data to get data from the second closest station. However, I haven't been able to figure out how to do this. Any suggestions or resources that might help?
library(rnoaa)
library(dplyr)  # needed for %>% and filter()
# load station data - takes some minutes
station_data <- ghcnd_stations() %>% filter(element == "PRCP")
# add id column for each location (necessary for next function)
sites_df$id <- 1:nrow(sites_df)
# retrieve all stations in radius (e.g. 20km) using lapply
stations <- lapply(1:nrow(sites_df),
function(i) meteo_nearby_stations(sites_df[i,],lat_colname = 'Lattitude',lon_colname = 'Longitude',radius = 20,station_data = station_data)[[1]])
# pull data for nearest stations - x$id[1] selects ID of closest station
stations_data <- lapply(stations,function(x) meteo_pull_monitors(x$id[1], date_min = "2022-05-01", date_max = "2022-05-31", var = c("prcp")))
stations_data
# poor attempt (the site is making me include it) - trying to rerun the pull for the second-closest station. I know this isn't working, but I don't know how to get lapply to run on a subset of a list, or understand exactly how the function runs, to code it another way
for (i in c(1,2,3,7,9,10,11,14,16,17,19,20)){
  stations_data[[i]] <- meteo_pull_monitors(stations[[i]]$id[2], date_min = "2022-05-01", date_max = "2022-05-31", var = c("prcp"))
}
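Language aside, the fallback being asked for reduces to "take the first station, in distance order, that actually has data". A minimal Python sketch with made-up data, just to pin down the control flow (every name here is hypothetical):
# hypothetical stand-in for pulling data for one station; returns None
# when the station has no precipitation records
def pull_prcp(station_id):
    fake_db = {"A": [1.2, 0.0], "B": None, "C": [0.4]}
    return fake_db.get(station_id)

# station IDs for one site, ordered nearest first
nearby_ids = ["B", "A", "C"]

# walk the list and keep the first station that actually has data
data = None
for sid in nearby_ids:
    data = pull_prcp(sid)
    if data is not None:
        break

print(data)  # [1.2, 0.0] -- "B" had nothing, so "A" was used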
I am looking for some help with this 3D NMDS code. I have three issues:
1. The layout of the plot moves significantly each time I execute the code.
2. The sites and species are sometimes far off of the plot.
3. The species text is often overlapping. How can I fix this?
I am unsure how to change the plotting environment to ggplot, so that might be out of the question.
library(vegan)
library(vegan3d)
library(tidyverse)
data("dune")
SiteID <- 1:20
NMDS = metaMDS(dune,distance="bray", try=500, wascores = TRUE, k=3)
NMDS1 = NMDS$points[,1]
NMDS2 = NMDS$points[,2]
NMDS3 = NMDS$points[,3]
NMDS = data.frame(NMDS1 = NMDS1, NMDS2 = NMDS2, NMDS3 = NMDS3, SiteID=SiteID)
NMDS_input <- metaMDS(dune,distance="bray",try=500,k=3,wascores = T)
pl4 <- with(NMDS, ordiplot3d(NMDS_input, pch=16, angle=50, main="Fish ion level 3", cex.lab=1.7,cex.symbols=1.5, tick.marks=FALSE))
sp <- scores(NMDS_input, choices=1:3, display="species", scaling="symmetric")
si <- scores(NMDS_input, choices=1:3, display="sites", scaling="symmetric")
text(pl4$xyz.convert(sp), rownames(sp), cex=0.7, xpd=TRUE)
sii <- as.data.frame(cbind(NMDS$SiteID,si))
with(NMDS, orditorp(pl4, labels = sii$V1, air=1, cex = 1))
labels must be character variables in orditorp. We always assumed so, but this was not checked in vegan::orditorp. The latest vegan version on GitHub takes care of this and will also work with numeric labels.
ordiplot3d returns projected coordinates (in 2D), and if you want to plot those, you can just use the pl4 object that you saved; you do not need pl4$xyz.convert. This object will also be accepted in orditorp.
If you want to plot points that were not used in the original mock-3D plot, you must use pl4$xyz.convert for their 2D projection. This function returns the projected coordinates in a form that is directly accepted by the standard R functions text and points (and some others), but they will not be accepted by orditorp (and I won't change this). You must make these into a two-column matrix-like object; data.frame() will work.
Your example contains a lot of unneeded code. The following is an edit with only the necessary lines and fixes that make this example work with the current vegan release.
library(vegan)
library(vegan3d)
data(dune)
SiteID <- as.character(1:20) # must be character
NMDS_input <- metaMDS(dune,distance="bray",try=500,k=3,wascores = T)
pl4 <- ordiplot3d(NMDS_input, pch=16, angle=50, main="Fish ion level 3", cex.lab=1.7,cex.symbols=1.5, tick.marks=FALSE) # no with(NMDS,...)
sp <- scores(NMDS_input, choices=1:3, display="species") # no arg scaling in scores.metaMDS
text(pl4$xyz.convert(sp), rownames(sp), cex=0.7, xpd=TRUE)
orditorp(pl4, labels = SiteID, air=1, cex = 1) # character labels w/points in the same location
So I have a list of stars and their respective distances. My assignment is to find which stars are within a certain distance (+- 10 parsec). I want to exclude some of them from further calculations in the program. The thing is, I don't want to remove them completely, so remove, pop, etc. aren't helping me. I still want those stars to be present in my output csv. I just want something like: for those stars which don't satisfy the if statement, don't use them in this calculation. So I guess the output would be blank for those.
I suppose it is an if or for statement, to mark those bad stars as False and then down the line use a calculation that excludes those faulty stars.
I'm a physics student and this is my first Python program ever! Please be cool about my ignorance...
Edit: forgive me if I include useless stuff; I don't really know what's important. I also use the uncertainties library, if it's of any use.
import numpy as np
import pandas as pd
from uncertainties import unumpy

column_names = ['id','pi','s_pi','v_r' ,'s_v', 'dis', 'X',
'ra_h', 'ra_m', 'ra_s','dec_d', 'dec_m',
'dec_s', 'ma', 's_ma', 'md', 's_md']
data = pd.read_csv("hyades_data.dat", skiprows=2, sep='\s+',
names=column_names)
Calculations with all the stars:
v_r = unumpy.uarray(data['v_r'], data['s_v'])
ma = unumpy.uarray(data['ma'], data['s_ma'])
md = unumpy.uarray(data['md'], data['s_md'])
mi = unumpy.sqrt(ma**2+md**2)
r_m = v_r*unumpy.tan(th)/(4.74*mi/1000)
diff = np.abs(r_pc - r_m)
'''
At this point I want to make the distinction, something like:
if np.abs(dist - 46.43) <= 10:
    use the star in the calculation
else:
    mark the star as False and leave it out
'''
mean_diff = diff.mean()
print("Mean : ")
print(mean_diff)
print(a_ref,d_ref)
df_va=pd.DataFrame(v_r)
df_mi = pd.DataFrame(mi)
df_rm = pd.DataFrame(r_m)
df_rpc = pd.DataFrame(r_pc)
df_diff = pd.DataFrame(diff)
#df_mean_diff = pd.DataFrame(mean_diff)
ve = v_r*np.tan(th)
output = pd.concat([data['id'], ra, dec, th_d, df_mi, df_rm, df_rpc,
df_diff,df_va], axis=1)
output.columns = ['id','ra', 'dec', 'th_d','mi', 'r_m', 'r_pc',
'dist_diff','va']
output.to_csv('results.csv', index=False)
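A minimal sketch of the masking idea described above, assuming dist holds each star's distance in parsec and 46.43 pc is the reference distance from the snippet; the example values are made up. Rows that fail the cut stay in the output but are blanked with NaN:
import numpy as np
import pandas as pd

dist = pd.Series([40.1, 46.5, 70.2, 52.0])  # made-up distances in parsec
diff = pd.Series([0.3, 0.1, 5.2, 0.4])      # made-up per-star values

good = np.abs(dist - 46.43) <= 10  # True for stars within +-10 pc

mean_diff = diff[good].mean()  # statistics over the good stars only
diff_out = diff.where(good)    # keeps every row, NaN where the cut fails

print(mean_diff)
print(diff_out)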
I'm new to OSMnx and Overpass queries in general. I am trying to understand the correct way to write custom queries when working with non-street infrastructure types.
Specifically, I am trying to understand why this query works
import osmnx as ox
my_custom_filter = '["railway"~"disused"]'
G = ox.graph_from_point((51.5073509,-0.1277583),
distance = 10000,
distance_type = 'bbox',
infrastructure = 'way["railway]',
network_type = 'none',
custom_filter = my_custom_filter
)
But this one yields a bad request error:
import osmnx as ox
my_custom_filter = '["railway"~"disused"]'
G = ox.graph_from_point((51.5073509,-0.1277583),
distance = 10000,
distance_type = 'bbox',
infrastructure = 'way["railway~"rail"]',
network_type = 'none',
custom_filter = my_custom_filter
)
Notice the difference is simply that I am specifying rail as the type of railway in the latter query.
See the OSM Railway Guide here.
If anyone can point me to any resources which would help me further understand how to construct custom filters - particularly filters combining more than one condition - that would be excellent also. For example, what would be the correct syntax to add an additional custom filter?
You were just missing a " in your argument. This works:
import osmnx as ox
ox.config(log_console=True, use_cache=True)
point = (51.5073509,-0.1277583)
dist = 10000
dt = 'bbox'
cf = '["railway"~"disused"]'
G = ox.graph_from_point(point, dist=dist, dist_type=dt, custom_filter=cf)
But it produces an EmptyOverpassResponse error, as there is nothing that matches your query in that search area. You will get a graph, however, if you change it to this, for example:
cf = '["railway"!~"disused"]'
G = ox.graph_from_point(point, dist=dist, dist_type=dt, custom_filter=cf)
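On the follow-up about combining conditions: one option (a sketch; the tag values are only illustrative) is Overpass regex alternation inside a single filter, which matches ways whose railway tag is any of the listed values. custom_filter is passed through to the Overpass query, so standard Overpass regex syntax applies:
import osmnx as ox

point = (51.5073509, -0.1277583)

# match ways whose railway tag is either "rail" or "disused"
cf = '["railway"~"rail|disused"]'
G = ox.graph_from_point(point, dist=10000, dist_type='bbox', custom_filter=cf)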
I am trying to estimate a SUR model of the form
y_{1,t} = \alpha_1 +\beta_1 x_{1,t} + \beta_2 x_{2,t} + \beta_3 y_{1,t-1} +\epsilon_{1,t}
y_{2,t} = \alpha_2 +\beta_4 x_{1,t} + \beta_5 x_{2,t} + \beta_6 y_{2,t-1} +\epsilon_{2,t}
Define mY = [y_1 y_2] and mX = [x_1 x_2].
For this purpose I am doing
iT = size(mY,1); iN = size(mY,2);
mXsur = kron(mX, eye(iN));
mXsurCell = mat2cell(mXsur, iN*ones(iT,1));
iR = size(mXsur,2);
Mdl = vgxset('n', iN, 'nAR',1, 'nX',iR,'Constant',true);
[SurOutput, SurSDerror, ~,SURcov] = vgxvarx(Mdl, mY, mXsurCell);
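To make the kron step concrete (a sketch with N = 2 series), each time-t row block of mXsur is
kron([x_{1,t} \;\; x_{2,t}], I_2) = \begin{pmatrix} x_{1,t} & 0 & x_{2,t} & 0 \\ 0 & x_{1,t} & 0 & x_{2,t} \end{pmatrix}
so equation 1 picks up its own coefficients (\beta_1, \beta_2) and equation 2 picks up (\beta_4, \beta_5) on the same regressors.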
The issue is that the 'nAR',1 setting seems to add one lag of both y variables to each equation, and I only wish to add one per equation. Is there a quick way to do this?
(Of course I can include the lagged terms manually in the mX matrix, but my question is whether we can do this via vgxset in a quicker way. I think not, based on my reading of the help file, but I still want to double-check.) Thanks