I am a bit of a beginner with all this, but I've searched around and can't find an answer.
I want to work out the phase of Io, as per this.
Where "The position or phase of Io (γIo) is measured counterclockwise around its orbit from Superior Geocentric Conjunction (SGC)"
I am currently using:

import ephem

moons = ephem.Io()
moons.compute(t)  # t is an ephem.Date (or a date string)
print(moons.x, moons.y, moons.z)  # Jupiter-centered coordinates, in Jupiter radii
Is there a way I can get the phase out of jupmoon.c? I am assuming that Io has an elliptical orbit, so it isn't as easy as simply working out the coordinates x, y, z on a circle about the centre of Jupiter?
You should be able to use the phase of Jupiter itself as the phase of any of its moons, since Jupiter and its moons are close enough together in the sky — and close enough together in space relative to the position of the Earth — that there should be no observable difference in phase between Jupiter itself and the moons that surround it when viewed from the Earth.
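For example, a minimal pyephem sketch (the date here is arbitrary); the phase attribute reports the percentage of the disc that is illuminated:

import ephem

jup = ephem.Jupiter()
jup.compute('2022/11/03')  # or whatever ephem.Date you are using as t
print(jup.phase)           # percent of Jupiter's disc illuminated; per the
                           # reasoning above, this serves for Io as well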
I am working on a simulation, which contains:
a bolt, welded upright to the world
a nut, connected to the bolt via a ScrewJoint. The mass of the nut is set to 0.02 kg; the inertia is diagonal, 1.1e-9 * I. This is configured via an .sdf file.
an iiwa manipulator, which is beside the point for now.
The problem is that the nut is very hard to manipulate, and I cannot find a parameter to adjust that would make it more lifelike. To be more specific:
I measure the ability of a force, applied tangentially in a horizontal plane to the nut, to cause the screwing motion of the joint that joins the nut to the bolt
I'd like to get a greater amount of motion at lower forces, and so far I am failing to achieve that
My interest in doing this is not idle; I am interested in more complicated simulations, which are also failing when the iiwa comes into contact with this same joint; I've asked about those here and here (both answered partially). To sum them up: when the manipulator grips the nut, the nut resists the screwing in such a manner that the Schunk gripper is forced to unclasp; the iiwa is thrown off-track, but the nut remains stationary.
I attach below two simpler experiments to better illustrate the issue:
1. Applying a 200 N force tangentially in a horizontal plane using ExternallyAppliedSpatialForce.
Graph notation (here as well as below):
The left graph contains linear quantities (m, m/s, etc) along world's Z axis, the right graph contains angular quantities (in degrees, deg, deg/s, etc) around world's Z axis. The legend entries with a trailing apostrophe use the secondary Y-axis scale; other legend entries use the primary Y-axis scale.
Experiment summary:
This works as expected: 200 N is enough to make the nut spin on the bolt, the nut traveling vertically along the bolt just under 1 centimeter and rotating through more than 90 degrees. Note: the externally applied force does not show up on my graph.
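For reference, applying such a force in pydrake might look like the sketch below; the body name "nut" and the plant/plant_context variables are assumptions about the model, not code from my actual setup:

import numpy as np
from pydrake.multibody.math import SpatialForce
from pydrake.multibody.plant import ExternallyAppliedSpatialForce

force = ExternallyAppliedSpatialForce()
force.body_index = plant.GetBodyByName("nut").index()  # assumed body name
force.p_BoBq_B = np.zeros(3)                           # apply at the body origin
force.F_Bq_W = SpatialForce(tau=np.zeros(3),           # no pure torque
                            f=np.array([200.0, 0.0, 0.0]))  # 200 N, horizontal
plant.get_applied_spatial_force_input_port().FixValue(plant_context, [force])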
2. Applying a force tangentially in a horizontal plane using the iiwa and a simple position controller.
Experiment summary:
The force here is approximately the same as before: 70 N along tz, but higher (170 N) in tx and ty, though it is now applied only for a brief moment. The nut moves just a few degrees, or hundredths of a centimeter.
The video of this unsuccessful interaction is below; the contact forces are visualized using ContactVisualizer.
Please advise me on how to make this screw_joint more compliant.
I've tried varying the mass and inertia of the nut (by up to several orders of magnitude) in these experiments; this seems to scale the contact forces, but does not affect the acceleration or velocity of the nut after contact.
I like your experiment using ExternallyAppliedSpatialForce to get an idea of the scales, though TBH I didn't quite get the details of the setup.
Things that caught my eye, though, are about scales, which you can estimate with pen and paper:
Your inertia is 1e-9 kg⋅m²?! Judging from your interaction with the iiwa, I estimated a radius of 1 cm, and with that you'd get 2e-6 kg⋅m², three orders of magnitude larger.
A force of 200 N on a 20 gram nut would cause an acceleration of 10,000 m/s². As a reference, that's about 1000 times the acceleration of Earth's gravity!
Are these numbers correct? Also, if you happen to have fast interactions (do you?), you might want to estimate a time step that makes sense for your application.
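As a sanity check, the arithmetic in Python (the 1 cm radius is my estimate, not a measured value):

m = 0.02          # nut mass [kg]
r = 0.01          # assumed effective radius [m]
print(m * r**2)   # ~2e-6 kg*m^2, vs the 1.1e-9 in the .sdf
print(200.0 / m)  # 200 N on 20 g -> 1e4 m/s^2, about 1000 g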
Hopefully this helps!
The good news is that I've fixed it; the bad news is that I don't understand the fix.
Let me restate the problem that I was tackling: the manipulated object reacted quite predictably to ExternallyAppliedSpatialForce, but couldn't be moved via contact with the manipulator.
What was done:
I've updated Drake from 2f340192a9dc79110410faf8a6d54a8615ddca92 (circa 22 Aug 2022) to 42448c0af1b39f0c46f760e7ae37d77097689ad3 (circa 3 Nov 2022).
After the update, my experimental setup broke down with the assertion Actuation input port for model instance ... must be connected (similar to the issue raised in this question). My fix was as follows:
// Add the nut-and-bolt model and weld the bolt to the world.
bolt_n_nut_ = internal::AddAndWeldModelFrom(sdf_path, "nut_and_bolt",
                                            plant_->world_frame(), "bolt",
                                            X_WC, plant_);
then later in ManipulationStation::Finalize:
// Feed a constant zero torque into the otherwise unconnected actuation port.
auto zero_torque = builder.template AddSystem<
    systems::ConstantVectorSource<double>>(Eigen::VectorXd::Zero(1));
builder.Connect(zero_torque->get_output_port(),
                plant_->get_actuation_input_port(bolt_n_nut_));
With the changes above, the manipuland began to interact with the manipulator:
Things to note in the graphs:
the distance the manipuland moved grew from fractions of a millimeter up to tens of centimeters. The video shows that the nut became manipulable.
This interaction violates the constraints of the ScrewJoint, i.e. the manipuland moves along its axis without the corresponding rotation.
I'm trying to make a Unity game that allows the user to explore the surface of an Earth shaped spheroid, based on WGS84.
The project so far is on Github, and there's a YouTube video of this behaviour.
A shape the size of Earth is way too big for Unity, so I just spawn tiles near the user, offset so that the first tile is at Unity's origin point. This bit works.
The issue is moving around. I've been using an approach where I get the user's position in ECEF coordinates, then normalise that to provide the global orientation for the player, then I translate the player forward based on that and their rotation.
The issue with this is that normalising the ECEF coordinate means that the player is moving on a sphere, but the WGS84 spheroid is not perfectly spherical. So the player sinks into the floor or flies up as you go south or north, respectively.
My question is, how can I allow the user to move around the surface of the spheroid by way of translation? I feel like there might be some way of taking the major/minor axis of the spheroid into account as the player moves, but I'm not sure how to do that.
I have no experience with Unity or computer graphics, I'm approaching it purely from the navigation point of view.
Let's look at the real world.
We want to travel either by walking/driving on the surface or flying at some altitude. When we do it, we move in the local coordinate system (North-South, East-West, Up-Down), we can't see any curvature. We assume the Earth is flat.
The problem arises when we try to do it on a computer, which is ruthlessly precise and knows the shape of the Earth. We can't assume the Earth is flat, we can't assume the Earth is a sphere. The Earth is a geoid. Fortunately for some purposes we can simplify things and assume the Earth is an ellipsoid. You chose WGS84. Good!
So how to move around an ellipsoid? Solving the problem analytically is a nightmare. We have to cheat ;)
We should assume the Earth is flat for a moment, make a move in a chosen direction in the local coordinate system, write down the altitude of the new position, calculate the global geodetic coordinates (Lat, Long, Alt) of that new point, and then replace the altitude with the one obtained while using the local coordinate system. In other words: each time we move forward along a perfectly straight line and diverge from the ellipsoid (just a tiny bit), we force the altitude not to change in relation to the ellipsoid.
Implementation.
You need to be able to freely translate coordinates between geodetic (Lat, Long, Alt) and ECEF. Going from geodetic to ECEF is easy. Finding geodetic coordinates for a given ECEF position is much more complex; there are many different algorithms, and I'm sure you can find a ready-to-use implementation somewhere.
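The easy direction, geodetic to ECEF on WGS84, is only a few lines; here is a minimal Python sketch (angles in degrees, distances in meters):

import math

A = 6378137.0             # WGS84 semi-major axis [m]
F = 1.0 / 298.257223563   # WGS84 flattening
E2 = F * (2.0 - F)        # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z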
What you also need is a Local Tangent Plane; to be precise, you are going to use NED (North-East-Down).
Let's assume your object is initially at some geodetic position. You write down the altitude (relative to the ellipsoid). Then you create a local NED coordinate system with its origin at that point and move the object in that local coordinate system, writing down how much the altitude (or rather the Down coordinate) changed. Then you calculate the ECEF coordinates of the new position and transform them to geodetic (Lat, Long, Alt). You have the old altitude and the altitude change in NED coordinates, so you know the new altitude. You then apply that altitude to your new geodetic coordinates (brutally replace the Alt in Lat/Long/Alt with the new value).
Then you make another move in the NED coordinates defined for that new position. And so on...
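Put together, one movement step might look like this sketch; it assumes the geodetic_to_ecef above plus an ecef_to_geodetic(x, y, z) helper from one of those ready-made algorithms:

import math
import numpy as np

def ned_to_ecef_matrix(lat_deg, lon_deg):
    # Columns are the local North, East and Down unit vectors in ECEF.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    sla, cla = math.sin(lat), math.cos(lat)
    slo, clo = math.sin(lon), math.cos(lon)
    return np.array([[-sla * clo, -slo, -cla * clo],
                     [-sla * slo,  clo, -cla * slo],
                     [ cla,        0.0, -sla      ]])

def step(lat_deg, lon_deg, alt_m, move_ned):
    # Translate in the local NED frame, reproject, then force the altitude.
    p = np.array(geodetic_to_ecef(lat_deg, lon_deg, alt_m))
    p += ned_to_ecef_matrix(lat_deg, lon_deg) @ np.asarray(move_ned)
    new_lat, new_lon, _ = ecef_to_geodetic(*p)    # discard the drifted altitude
    return new_lat, new_lon, alt_m - move_ned[2]  # Down is positive downward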
I'm not sure if this is clear; the process is quite complicated. If you can't follow it - shout :)
I'm new to Swift, and I'm trying to build a sky map app like the application "Star Chart".
I already have a sky map image from NASA and have wrapped it onto an SCNSphere; I've also set a camera node at the center of the sphere so it looks like a 360-degree view. Furthermore, I use the accelerometer to check which direction the camera is looking.
I know that a sky map like "Star Chart" doesn't need the internet to update its data. So now the biggest problem is that I don't know how to correct the position of my sky map according to the user's current time and location.
Any good advice and help? Thanks in advance! I've tried very hard to find related information but have been stuck here for three weeks.
You just need to rotate your map with time+longitude around Earth's rotation axis, and with latitude around the axis at longitude = 90 degrees, while the Earth is placed in the center of your sphere. For stars the offset does not matter, so you can ignore the Sun-Earth distance and the Earth's radius as well.
The time rotation must be the day and year rotations combined (i.e., sidereal time). On top of that you have to apply precession and nutation if you want higher precision.
Of course the stars are moving too, so if you need really high precision and/or a long time interval (hundreds or thousands of years), then this approach is not good and you should use a stellar catalog with proper motions implemented.
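For the day+year rotation combined, a minimal Python sketch using a standard low-precision sidereal-time formula (precession and nutation ignored):

from datetime import datetime, timezone

def local_sidereal_deg(utc, lon_deg):
    # Days since the J2000.0 epoch (2000-01-01 12:00 UTC).
    d = (utc - datetime(2000, 1, 1, 12, tzinfo=timezone.utc)).total_seconds() / 86400.0
    # Greenwich mean sidereal time plus the observer's east longitude.
    return (280.46061837 + 360.98564736629 * d + lon_deg) % 360.0

Rotate the map by this angle around the Earth's rotation axis, then tilt that axis by (90° - latitude) as described above.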
For more info see related:
How to draw sky chart
Plotting a star chart efficiently
If you want to use a catalog and real colors, then you will also need:
Star B-V color index to apparent RGB color
simplified atmospheric scattering GLSL shader
And finally, here are some hints for such applications:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
I am using SpriteKit and I need to calculate the fall time of an object (since it changes depending on the screen size). The problem is that the gravity of the scene is given in m/s^2, but all distances are measured in points.
I have tried to find the conversion between points and meters, but without much success.
Any suggestions on how to deal with it?
You may be able to use something like this (a distance calculator written by someone else on Stack Overflow) to calculate the distance between your node and a point directly below it near the ground, or another empty node, then plug that distance into a kinematics equation to calculate the time.
Unfortunately, I can't give you any code because I've never tried this myself.
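That said, the kinematics are just t = sqrt(2d/g) for a fall from rest, so a sketch might look like this (the 150 points-per-meter figure is SpriteKit's commonly reported scale, an assumption you should calibrate for your own scene):

import math

POINTS_PER_METER = 150.0  # assumed SpriteKit scale; verify for your setup

def fall_time(distance_points, gravity=9.8):
    d = distance_points / POINTS_PER_METER  # points -> meters
    return math.sqrt(2.0 * d / gravity)     # t = sqrt(2d/g), fall from rest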
Users can sketch in my app using a very simple tool (move the mouse while holding LMB). This results in a series of mousemove events, and I record the cursor location at each event. The resulting polyline tends to be rather dense, with recorded points almost every other pixel. I'd like to smooth this pixelated polyline, but I don't want to smooth away intended kinks. So how do I figure out where the kinks are?
The image shows the recorded trail (red pixels) and the 'implied' shape as a human would understand it. People tend to slow down near corners, so there is usually even more noise here than on the straight bits.
Polyline tracker http://www.freeimagehosting.net/uploads/c83c6b462a.png
What you're describing may be related to gesture recognition techniques, so you could search on them for ideas.
The obvious approach is to apply a curve fit, but that will have the effect of smoothing away all the interesting details and kinks. Another suggested approach is to look at speeds and accelerations, but that can get hairy (direction changes can be very fast, or very slow and deliberate).
A fairly basic but effective approach is to simplify the samples directly into a polyline.
For example, work your way through the samples, say from sample 1 to sample 4, and check whether all 4 samples lie within a reasonable error of the straight line between 1 and 4. If they do, extend this to points 1..5 and repeat until the straight line from the start point to the end point no longer provides a reasonable approximation to the curve defined by those samples. Create a line segment up to the previous sample point and start accumulating a new line segment.
You have to be careful about your thresholds when the samples are too close to each other, so you might want to adjust the sensitivity for samples fewer than 4-5 pixels apart.
This will give you a set of straight lines that will follow the original path fairly accurately.
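A sketch of that accumulate-and-split loop in Python (the tolerance in pixels is the "reasonable error" knob):

import math

def point_line_distance(p, a, b):
    # Perpendicular distance from p to the infinite line through a and b.
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def simplify(samples, tolerance=2.0):
    result = [samples[0]]
    start = 0
    for i in range(2, len(samples)):
        # Does the chord start..i still approximate every sample in between?
        if any(point_line_distance(samples[j], samples[start], samples[i]) > tolerance
               for j in range(start + 1, i)):
            result.append(samples[i - 1])  # segment ends at the previous sample
            start = i - 1
    result.append(samples[-1])
    return result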
If you require additional smoothing, or want to create a scalable vector graphic, you can then curve-fit from the polyline. First, identify the kinks (the places in your polyline where the angle between one line and the next is sharp; e.g. anything over 140 degrees is considered a smooth curve, anything less is considered a kink) and break the polyline at those discontinuities. Then curve-fit each of these sub-sections of the original gesture to smooth them. This will have the effect of smoothing the smooth stuff and sharpening the kinks. (You could go further and insert small smooth corner fillets instead of these sharp joints to reduce the sharpness of the joins.)
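Continuing the sketch above, the kink test on the simplified polyline could look like this (the 140-degree threshold is the one from the text):

def find_kinks(polyline, smooth_threshold_deg=140.0):
    kinks = []
    for i in range(1, len(polyline) - 1):
        (ax, ay), (bx, by), (cx, cy) = polyline[i - 1], polyline[i], polyline[i + 1]
        v1, v2 = (ax - bx, ay - by), (cx - bx, cy - by)
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0.0 or n2 == 0.0:
            continue
        cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < smooth_threshold_deg:  # sharp interior angle -> kink
            kinks.append(i)
    return kinks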
Brute force, but it may just achieve what you want.
Rather than trying to do this from the resulting data, have you considered looking at the timing of the data as it comes in? If the mouse stops or slows noticeably, you use the trend since the last 'kink' (the last time the mouse slowed) to establish the direction of travel. If the user goes off in a new direction, you call it a kink; otherwise, you ignore the current slowing trend and start waiting for the next one.
Well, one way would be to use a true curve-fitting algorithm: generate a Bézier curve (with exact endpoints, using Catmull-Rom or something similar), then optimize and recursively subdivide (using distance from the actual line points as a cost metric). This may be too complicated for your use case, though.
Record the order the pixels are drawn in. Then compute the slope between pixels that are "near" but not "close". I'm guessing a graph of the slope between pixel(i) and pixel(i+7) might exhibit easily identifiable "jumps" around kinks in the curve.