I'm trying to implement a 4 link mechanism and the model I've put together is shown here:
Revolute1 has a speed of 5 rad/s. All bars are 0.05 m in diameter. World settings are the default and you can see the bar vectors on the picture. Nothing else is changed.
When I try to run the simulation I get these errors on OpenModelica:
[1] 10:29:43 Symbolic Error
[Modelica.Mechanics.MultiBody.Parts: 238:5-238:62]: Model is structurally singular, error found sorting equations
[2] 10:33:25 Translation Error
Internal error Transformation Module PFPlusExt index Reduction Method Pantelides failed!
What am I doing wrong? It seems pretty straightforward. Thanks.
Apart from the required use of Modelica.Mechanics.MultiBody.Joints.RevolutePlanarLoopConstraint for the planar four-bar linkage, the model topology also needs to be adapted, assuming bar2 is fixed to the world. See the example model Planar4Bar.
You need to replace one of the Revolute joints with a RevolutePlanarLoopConstraint. Why? Citing the documentation:
If a planar loop is present, e.g., consisting of 4 revolute joints where the joint axes are all parallel to each other, then there is no unique mathematical solution if all revolute joints are modelled with Joints.Revolute and the symbolic algorithms will fail. The reason is that, e.g., the cut-forces in the revolute joints perpendicular to the planar loop are not uniquely defined when 3-dim. descriptions of revolute joints are used.
I am using lsqnonlin in MATLAB to align a rigid skeleton model to raw joint center data recorded from a device that does not necessarily yield a rigid skeleton. Therefore, my objective function to be minimized is the distance between model joint locations and raw data joint locations.
My algorithm does a pretty good job (red is the rigid model and blue is the raw data):
except for some select frames where my skeleton is off:
I am interested in examining the solution process graphically so that I can visualize what lsqnonlin is trying to do for every frame. Ideally, I would output an iteration history, but this seems to be available only for other solvers.
For lsqnonlin it seems that I need to specify a plotting function via the PlotFcn option in optimoptions. I have extracted the function optimplotx.m from the Optimization Toolbox; it displays a bar graph of the state variables at each iteration as lsqnonlin converges. My plan is to tweak this function to reproduce the plot of the red skeleton displayed previously.
My problem is that the quantity to be optimized is not the joint locations themselves, but joint angles. I have another function which does forward kinematics and gives me joint positions from joint angles. I would like to pass model parameters into optimplotx.m so that I may call my forward kinematics function and get the plot done but I am unsure how to do that.
These are the inputs to optimplotx.m:
It seems that somewhere deep in the Optimization Toolbox, MATLAB expects these inputs, and they are not to be changed. It is required that I pass @optimplotx as a handle anyway, without any extra arguments. It seems that the only way I might be able to get around this is to make my model parameters global in both my main file and optimplotx. Is there any way around this?
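For concreteness, here is a toy version of my setup; every name and number below is a placeholder for my real code, not the actual model:

```matlab
% Toy stand-in for my real problem: a 2-D chain fitted to raw joint centres.
modelParams = [0.30 0.25 0.20];                   % e.g. segment lengths
rawJoints   = [0.28 0.10; 0.45 0.30; 0.50 0.55];  % e.g. raw joint centres (3 joints, x/y)
fk = @(q, L) [cumsum(L(:).*cos(cumsum(q(:)))), cumsum(L(:).*sin(cumsum(q(:))))];

opts = optimoptions('lsqnonlin', 'PlotFcn', @optimplotx);   % nowhere to hand over modelParams
qOpt = lsqnonlin(@(q) reshape(fk(q, modelParams) - rawJoints, [], 1), ...
                 zeros(3,1), [], [], opts);

% Inside my tweaked copy of optimplotx.m I would like to call something like
%   p = fk(x, modelParams);   % but optimplotx never receives fk or modelParams
% and plot p instead of the bar graph, which is why I am considering globals.
```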
I am following a course on designing a TR3 manipulator in SolidWorks, exporting an XML file from SolidWorks which can be imported via MATLAB into Simscape Multibody, and then trying to create a control system for it.
Unfortunately, Simscape has changed over the years and the joint actuator block is no longer present.
At this point please look at the two pictures (1st one - course, 2nd one - my implementation).
I have tried to connect the multiplexer directly to the revolute joint. Unfortunately, I get the following error:
Error in port widths or dimensions. Output port 1 of 'TR3RobotExport/Mux' is a one dimensional vector with 3 elements.
Component:Simulink | Category:Model error
Error in port widths or dimensions. Input port 1 of 'TR3RobotExport/Simulink-PS Converter' is a one dimensional vector with 1 elements.
The screenshot of the course image is not readable.
As for the revolute joint, its torque input should be a scalar: the joint has only one axis of rotation, so a vector would make no sense.
It may be that the provided example used prescribed-motion actuation, since those signals need to be bundled into one:
I am trying to develop an FOC algorithm (see screenshots attached) using the Motor Control Blockset in Simulink, but I am getting the following errors (see screenshot attached). I don't know what I am doing wrong since I fed my d and q currents to my discrete PI controllers and fed the results straight into my Inverse Park Transform. I don't know how to solve this and any help would be greatly appreciated.
Simulink file OneDrive Link:
https://1drv.ms/u/s!AokIrMv_d8kz3BMaImhVY_guP_N-?e=Fu1a1e
I can't open your model. But from your screenshot I see the following:
The little numbers at the signal arrows denote how many signals are in that path. I think it should always be 1 (in your model), except for the three-phase currents, where it should be 3. So wherever you find a 2, something is wrong.
The probable mistake is that you demux PhaseCur in the lower right corner into only two signal paths. For whatever reason, the Clarke transform only asks for phases a and b, not c. But instead of dropping c, you feed it into the block together with a (or with a and b bundled and c on its own, for that matter). So use a 3-way Demux instead of the 2-way Demux, and discard phase c before the Clarke transform.
This is the most apparent mistake so far; try it and see if it works.
The documentation of the Clarke transform block states:
The block accepts two signals out of the three phases (abc), automatically calculates the third signal, and outputs the corresponding components in the αβ reference frame. For example, the block accepts a and b input values where the phase-a axis aligns with the α-axis.
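To make that concrete, here is a hedged sketch of the amplitude-invariant Clarke transform computed from two phases in plain MATLAB (not the blockset's internal code; the numbers are arbitrary):

```matlab
% Clarke transform from two measured phase currents, assuming a balanced system
% (ia + ib + ic = 0), which is why the block does not need phase c as an input.
ia = 1.0;  ib = -0.3;             % example phase currents [A]
ic = -(ia + ib);                  % the third phase follows from the other two

iAlpha = ia;                      % alpha axis aligned with the phase-a axis
iBeta  = (ia + 2*ib) / sqrt(3);   % beta axis, 90 degrees ahead of alpha
```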
I have been trying to implement a navigation system for a robot that uses an Inertial Measurement Unit (IMU) and camera observations of known landmarks in order to localise itself in its environment. I have chosen the indirect-feedback Kalman Filter (a.k.a. Error-State Kalman Filter, ESKF) to do this. I have also had some success with an Extended KF.
I have read many texts and the two I am using to implement the ESKF are "Quaternion kinematics for the error-state KF" and "A Kalman Filter-based Algorithm for IMU-Camera Calibration" (pay-walled paper, google-able).
I am using the first text because it better describes the structure of the ESKF, and the second because it includes details about the vision measurement model. In my question I will be using the terminology from the first text: 'nominal state', 'error state' and 'true state', which refer to the IMU integrator, the Kalman filter, and the composition of the two (nominal minus errors), respectively.
The diagram below shows the structure of my ESKF implemented in Matlab/Simulink; in case you are not familiar with Simulink I will briefly explain the diagram. The green section is the Nominal State integrator, the blue section is the ESKF, and the red section is the sum of the nominal and error states. The 'RT' blocks are 'Rate Transitions' which can be ignored.
My first question: Is this structure correct?
My second question: How are the error-state equations for the measurement models derived?
In my case I have tried using the measurement model of the second text, but it did not work.
Your block diagram combines two indirect methods for bringing IMU data into a KF:
You have an external IMU integrator (in green, labelled "INS", sometimes called the mechanization, and described by you as the "nominal state", but I've also seen it called the "reference state"). This method freely integrates the IMU externally to the KF and is usually chosen so you can do this integration at a different (much higher) rate than the KF predict/update step (the indirect form). Historically I think this was popular because the KF is generally the computationally expensive part.
You have also fed your IMU into the KF block as u, which I am assuming is the "command" input to the KF. This is an alternative to the external integrator. In a direct KF you would treat your IMU data as measurements; in order to do that, your filter state would have to model (position, velocity, and) acceleration and (orientation and) angular velocity, otherwise there is no possible H such that Hx can produce estimated IMU output terms. If you instead feed your IMU measurements in as a command, your predict step can simply act as an integrator, so you only have to model as far as velocity and orientation.
You should pick only one of those options. I think the second one is easier to understand, but it is closer to a direct Kalman filter, and requires you to predict/update for every IMU sample, rather than at the (I assume) slower camera framerate.
Regarding measurement equations for the first option (the external integrator): in any KF you can only predict things you can know from your state. The KF state in this case is a vector of error terms, and thus you can only predict things like "position error". As a result you need to pre-condition your measurements in z to be position errors. So make your measurement the difference between your "estimated true state" and the position from your "noisy camera observations". This exact idea may be what the xHat input to the indirect KF represents; I don't know anything about the MATLAB/Simulink stuff going on there.
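As a toy sketch of that pre-conditioning idea (plain MATLAB with made-up numbers and a position-only error state, not your Simulink model):

```matlab
% Indirect/error-state update: the measurement handed to the KF is itself an error.
p_nom = [1.00; 2.00; 0.50];      % nominal position from the external IMU integrator
p_cam = [1.02; 1.97; 0.52];      % position recovered from the camera landmark observation

P = 0.05*eye(3);                 % error-state covariance (position error only)
R = 0.01*eye(3);                 % camera measurement noise covariance
H = eye(3);                      % the measurement maps the position error directly

z  = p_cam - p_nom;              % pre-conditioned measurement: a position *error*
K  = P*H'/(H*P*H' + R);          % Kalman gain
dx = K*z;                        % estimated error (prior error is zero after each reset)
P  = (eye(3) - K*H)*P;           % covariance update

p_corrected = p_nom + dx;        % fold the error estimate back into the nominal state
```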
Regarding real-world considerations for the summing block (in red) I refer you to another answer about indirect Kalman filters.
Q1) Your Simulink model looks appropriate. Let me shed some light on quaternion-mechanization-based KFs, which I've worked on for navigation applications.
The Kalman filter is an elegant mathematical technique that borrows from the science of stochastics and measurement; it can help you reduce the noise in the system without the need to model that noise elaborately.
All KF systems start with some preliminary understanding of the model that you want to make free of noise. The measurements are fed back to evolve the states (the measurement equation Y = CX). In your case, the states you are talking about are the errors in the quaternion, which would be the four values dq1, dq2, dq3, dq4.
A KF working well in your application will accurately determine the attitude/orientation of the device by controlling the error around the quaternion. A quaternion describes the spatial orientation of a body using a scalar and a vector, more specifically an angle and an axis.
The error equations you are talking about are covariances, which contribute to the Kalman gain. The covariances denote the spread around the mean, and they are useful in understanding how the central/average behavior of the system changes with time. Low covariances denote less deviation from the mean behavior. As the KF cycles run, the covariances keep getting smaller.
The Kalman Gain is finally used to compensate for the error between the estimates of the measurements and the actual measurements that are coming in from the camera.
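To illustrate that cycle with a toy scalar example (made-up numbers, not your quaternion filter), note how the covariance shrinks as updates arrive:

```matlab
% 1-D KF estimating a constant true value of 0.5 from noisy measurements.
F = 1; Hm = 1;                   % trivial state-transition and measurement maps
Q = 0.001; R = 0.04;             % model and measurement noise covariances (made up)
x = 0; P = 1;                    % initial estimate and (deliberately large) uncertainty

for k = 1:20
    % Predict: uncertainty grows by the model noise.
    x = F*x;
    P = F*P*F' + Q;

    % Update: the Kalman gain weighs estimate against measurement; P shrinks.
    z = 0.5 + sqrt(R)*randn;
    K = P*Hm'/(Hm*P*Hm' + R);
    x = x + K*(z - Hm*x);
    P = (1 - K*Hm)*P;
end
fprintf('final estimate %.3f, final covariance %.4f\n', x, P);
```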
Again, this elegant technique first ensures that the error in the quaternion values converges around zero.
Q2) The EKF is a great technique to use as long as you have a non-linear measurement construction. Be very careful using an EKF if there are too many transformations in your system, i.e., don't try to reconstruct measurements by transforming your states. This seriously affects the model's sanctity, and since the noise covariances would not undergo similar transformations, there is a chance of hitting a singularity as soon as the matrices become non-invertible.
You could look at constant-gain KF schemes, which would save you from covariance propagation and save substantial computational effort and time. These techniques are quite new and look very promising. They actively absorb P (error covariance), Q (model noise covariance) and R (measurement noise covariance), and work well with EKF schemes.
I'm simulating a shaft system in Simulink, where I have to find the displacement of a mass. I'm not sure how to model this in Simulink because of the shaft and pulley. I'm looking through the documentation and the closest thing I see to a shaft is the wheel and axle block. But the shafts are connected by a flexible shaft, which is similar to a spring. Any ideas?
This is a fairly trivial task when using Simscape, which is made specifically to simulate physical systems. You'll find most of the blocks you need ready in the library.
I've used Simscape to create a model of a complete hybrid truck... In plain Simulink it can be done too, but you'll need to build your own differential equations for the task. In your case, the flexible axle could be translated into another block with a spring/damper system inside.
If you haven't got access to Simscape, you may also consider using .m (MATLAB) files to write your differential equations. These can then be used as a block in Simulink, varying (only) a few parameters over time.
Take this step by step:
1. Draw a free-body diagram and write out equations for all the forces as a function of displacement, velocity and acceleration of every element (including rotation, obviously). For instance, you know that the force on the box m will be c*dy/dt plus whatever the pulley experiences.
2. Sort out the rotation of the rod first. You know that T = I*d(omega)/dt if you get rid of the rest of the system. So do something analogous to the car engine example in MATLAB: divide the input T by I to get the angular acceleration, integrate it to get angular velocity, and integrate once more to get angular displacement.
3. Keep adding bits one by one. First, you know that there will be a moment proportional to k*(theta_1 - theta_2) acting. This will oppose the motion of rod 1 and act to create motion of rod 2. Add a new "branch" to your model to get theta_2 the same way you got theta_1 (a minimal sketch of steps 2-3 follows this list).
4. Go on including further elements...
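A minimal sketch of steps 2-3, with two rotational inertias coupled by a torsional spring; all parameter names and values below are illustrative, not taken from the original problem:

```matlab
% States: [theta1; omega1; theta2; omega2]. Rod 1 is driven by T(t); rod 2 is
% coupled to it through the torsional stiffness k and sees some damping c.
I1 = 0.02; I2 = 0.05;            % inertias [kg m^2]
k  = 1.5;                        % torsional stiffness [N m/rad]
c  = 0.01;                       % damping on rod 2 [N m s/rad]
T  = @(t) 0.1*sin(2*t);          % input torque on rod 1 [N m]

dstate = @(t, s) [ s(2); ...
                   (T(t) - k*(s(1) - s(3)))/I1; ...   % rod 1: input torque minus spring moment
                   s(4); ...
                   (k*(s(1) - s(3)) - c*s(4))/I2 ];   % rod 2: spring moment minus damping

[t, s] = ode45(dstate, [0 10], zeros(4, 1));
plot(t, s(:, [1 3])); legend('\theta_1', '\theta_2');
```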