Problem reading serial port in C# .NET 2.0 to get weighing machine output (Windows XP)

I'm trying to read the weight from a Sartorius weighing scale, model No. BS2202S, using the following code in C# .NET 2.0 on a Windows XP machine:
public string readWeight()
{
    string lastError = "";
    string weightData = "";

    SerialPort port = new SerialPort();
    port.PortName = "COM1";
    port.BaudRate = 9600;
    port.Parity = Parity.Even;
    port.DataBits = 7;
    port.StopBits = StopBits.One;
    port.Handshake = Handshake.RequestToSend;

    try {
        port.Open();
        weightData = port.ReadExisting();
        if (weightData == null || weightData.Length == 0) {
            lastError = "Unable to read weight. The data returned from the weighing machine is empty or null.";
            return lastError;
        }
    }
    catch (TimeoutException) {
        lastError = "Operation timed out while reading weight";
        return lastError;
    }
    catch (Exception ex) {
        lastError = "The following exception occurred while reading data." + Environment.NewLine + ex.Message;
        return lastError;
    }
    finally {
        if (port.IsOpen) {
            port.Close();
            port.Dispose();
        }
    }
    return weightData;
}
I'm able to read the weight using the HyperTerminal application (supplied with Windows XP) with the same serial port parameters given above for opening the port. But with the above code snippet, I can open the port, yet it returns empty data every time.
I also tried opening the port using the code given in this Stack Overflow thread, but it still returns empty data.
Kindly assist me.

I know this is probably old now ... but for future reference ...
Look at the handshaking. There is both hardware handshaking and software handshaking; your problem could be either, so you need to try both.
For hardware handshaking you can try:
mySerialPort.DtrEnable = True
mySerialPort.RtsEnable = True
Note that
mySerialPort.Handshake = Handshake.RequestToSend
does not, as far as I know, set the DTR line, which some serial devices may require.
Software handshaking is also known as XON/XOFF and can be set with
mySerialPort.Handshake = Handshake.XOnXOff
OR
mySerialPort.Handshake = Handshake.RequestToSendXOnXOff
You may still need to enable DTR.
When all else fails, don't forget to try all of these handshaking combinations.
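Putting those suggestions together in C# (a rough sketch only; the exact handshake mode and line terminator depend on how the balance is configured, and the ReadLine/ReadTimeout calls are my additions rather than part of the original code):

SerialPort port = new SerialPort("COM1", 9600, Parity.Even, 7, StopBits.One);
port.Handshake = Handshake.None;     // also try XOnXOff or RequestToSend if None gives nothing
port.DtrEnable = true;               // assert DTR explicitly - the balance may require it
port.RtsEnable = true;               // assert RTS manually (only when not using RequestToSend handshaking)
port.ReadTimeout = 2000;             // milliseconds to wait for the scale to answer
port.Open();
string weightLine = port.ReadLine(); // blocks until a full line arrives or the timeout elapses

Note also that ReadExisting() returns immediately with whatever happens to be buffered, so a blocking read such as ReadLine() (or ReadTo with the balance's terminator) is usually a better fit for a device that sends one complete line per weight.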

Since someone else will probably run into this in the future: handshaking is a selectable option on the balance itself.
On most of these balances you will see options for Software, Hardware 2 char, and Hardware 1 char. The default setting on Sartorius balances is Hardware 2 char; I usually recommend changing it to Software.
Also, if it stops working altogether, it can often be fixed by restoring the unit's factory defaults using the 9 1 1 parameter and then re-entering the communication settings.
An example of how to change the settings can be found in the manual on this page:
http://www.dataweigh.com/products/sartorius/cpa-analytical-balances/


How to set the proper data type when writing to an OPC-UA node in Milo?

I am an OPC-UA newbie integrating a non-OPC-UA system with an OPC-UA server using the Milo stack. Part of this includes writing values to nodes in the OPC-UA server. One of my problems is that values from the other system come in the form of a Java String and thus need to be converted to the node's proper data type. My first brute-force proof of concept uses the code below to create a Variant that I can use to write to the node (as in WriteExample.java). The variable value is the Java String containing the data to write, e.g. "123" for an Integer or "32.3" for a Double. The solution currently hard-codes the "types" from the Identifiers class (org.eclipse.milo.opcua.stack.core, see the switch statement), which is not pretty, and I am sure there is a better way to do this?
Also, how do I proceed if I want to convert and write "123" to a node that is, for example, UInt64?
try {
    VariableNode node = client.getAddressSpace().createVariableNode(nodeId);
    Object val = new Object();
    Object identifier = node.getDataType().get().getIdentifier();
    UInteger id = UInteger.valueOf(0);

    if (identifier instanceof UInteger) {
        id = (UInteger) identifier;
    }
    System.out.println("getIdentifier: " + node.getDataType().get().getIdentifier());

    switch (id.intValue()) {
        // Based on the Identifiers class in org.eclipse.milo.opcua.stack.core
        case 11: // Double
            val = Double.valueOf(value);
            break;
        case 6: // Int32
            val = Integer.valueOf(value);
            break;
    }

    DataValue data = new DataValue(new Variant(val), StatusCode.GOOD, null);
    StatusCode status = client.writeValue(nodeId, data).get();
    System.out.println("Wrote DataValue: " + data + " status: " + status);
    returnString = status.toString();
} catch (Exception e) {
    System.out.println("ERROR: " + e.toString());
}
I've looked at Kevin's response to this thread: How do I reliably write to a OPC UA server? But I'm still a bit lost... Some small code example would really be helpful.
You're not that far off. Every sizable codebase eventually has one or more "TypeUtilities" classes, and you're going to need one here.
There's no getting around the fact that you need to be able to map types in your system to OPC UA types and vice versa.
For unsigned types you'll use the UShort, UInteger, and ULong classes from the org.eclipse.milo.opcua.stack.core.types.builtin.unsigned package. There are convenient static factory methods that make their use a little less verbose:
UShort us = ushort(foo);
UInteger ui = uint(foo);
ULong ul = ulong(foo);
I'll explore the idea of including some kind of type conversion utility in an upcoming release, but even with that, the way OPC UA works means you have to know the DataType of a Node to write to it, and in most cases you want to know the ValueRank and ArrayDimensions as well.
You either know these attribute values a priori, obtain them via some other out of band mechanism, or you read them from the server.
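As a very rough illustration of such a "TypeUtilities" helper (my own sketch, not Milo API; it only covers some of the built-in types and assumes the node's DataType identifier is one of the standard numeric ids, e.g. Int32 = 6, Double = 11, UInt64 = 9):

import org.eclipse.milo.opcua.stack.core.types.builtin.Variant;
import static org.eclipse.milo.opcua.stack.core.types.builtin.unsigned.Unsigned.*;

public class TypeUtilities {
    // Convert a String coming from the non-OPC-UA system into a Variant
    // matching the node's built-in DataType id.
    public static Variant variantOf(int builtinTypeId, String value) {
        switch (builtinTypeId) {
            case 1:  return new Variant(Boolean.valueOf(value));           // Boolean
            case 4:  return new Variant(Short.valueOf(value));             // Int16
            case 5:  return new Variant(ushort(Integer.parseInt(value)));  // UInt16
            case 6:  return new Variant(Integer.valueOf(value));           // Int32
            case 7:  return new Variant(uint(Long.parseLong(value)));      // UInt32
            case 8:  return new Variant(Long.valueOf(value));              // Int64
            case 9:  return new Variant(ulong(Long.parseLong(value)));     // UInt64 (use BigInteger for very large values)
            case 10: return new Variant(Float.valueOf(value));             // Float
            case 11: return new Variant(Double.valueOf(value));            // Double
            case 12: return new Variant(value);                            // String
            default: throw new IllegalArgumentException("Unhandled built-in type id: " + builtinTypeId);
        }
    }
}

With something like that in place, the switch in your snippet collapses to DataValue data = new DataValue(TypeUtilities.variantOf(id.intValue(), value), StatusCode.GOOD, null);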

How to implement a read/write function using libmodbus? (C code)

I want to read/write according to the Modbus TCP specification.
So I'm trying to write the client and server code in a Linux environment.
(Eventually I want to communicate with a Windows program (as a client) using Modbus TCP.)
But it doesn't work as I expect, so I'm asking here.
I'm testing the client code on Linux as a client, and easymodbus as a server.
I used the libmodbus code.
I'd like to read coils (function 0x01) and write a coil (function 0x05).
When the code is executed using libmodbus, 'FF' is printed as the Unit ID part (according to the manual, 01 should be sent for Modbus TCP).
I don't know why 'FF' is printed (photo attached).
Wrong result:
Expected result:
'[00] [00] .... [00]' ; do you know where to control this part?
Do you have, or do you know of, sample code that implements the read/write functions using libmodbus?
Please let me know if you do.
ctx = modbus_new_tcp("192.168.0.99", 502);
modbus_set_debug(ctx, TRUE);

if (modbus_connect(ctx) == -1) {
    fprintf(stderr, "Connection failed: %s\n", modbus_strerror(errno));
    modbus_free(ctx);
    return -1;
}

tab_rq_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rq_bits, 0, nb * sizeof(uint8_t));
tab_rp_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rp_bits, 0, nb * sizeof(uint8_t));

nb_loop = nb_fail = 0;

/* WRITE BIT */
rc = modbus_write_bit(ctx, addr, tab_rq_bits[0]);
if (rc != 1) {
    printf("ERROR modbus_write_bit (%d)\n", rc);
    printf("Address = %d, value = %d\n", addr, tab_rq_bits[0]);
    nb_fail++;
} else {
    rc = modbus_read_bits(ctx, addr, 1, tab_rp_bits);
    if (rc != 1 || tab_rq_bits[0] != tab_rp_bits[0]) {
        printf("ERROR modbus_read_bits single (%d)\n", rc);
        printf("address = %d\n", addr);
        nb_fail++;
    }
}

printf("Test: ");
if (nb_fail)
    printf("%d FAILS\n", nb_fail);
else
    printf("SUCCESS\n");

free(tab_rq_bits);
free(tab_rp_bits);

/* Close the connection */
modbus_close(ctx);
modbus_free(ctx);
return 0;
That FF you see right before the Modbus function is actually correct. Quoting the Modbus Implementation Guide, page 23:
On TCP/IP, the MODBUS server is addressed using its IP address; therefore, the
MODBUS Unit Identifier is useless. The value 0xFF has to be used.
So libmodbus is just sticking to the Modbus specification. I'm assuming, then, that the problem is in easymodbus, which is apparently expecting you to use 0x01 as the unit id in your queries.
I imagine you don't want to mess with easymodbus, so you can fix this problem pretty easily from libmodbus: just change the default unit id:
modbus_set_slave(ctx, 1);
You could also go with:
rc = modbus_set_slave(ctx, MODBUS_BROADCAST_ADDRESS);
ASSERT_TRUE(rc != -1, "Invalid broadcast address");
to make your client address all slaves within the network, if you have more than one.
You have more info and a short explanation of where this problem comes from in the libmodbus man page for the modbus_set_slave function.
For a very comprehensive example, you can check the libmodbus unit tests.
And regarding your question number 5: I'm not sure how to answer it fully, but the zeros you mention are the states (true or false) you want to write to (or read from) the coils. For writing, you can change them with the value argument of modbus_write_bit(ctx, address, value).
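For reference, here is a minimal sketch that puts the pieces above together (my own illustration, not taken from the libmodbus tests; the IP address, port and coil address are placeholders, and whether address 0x0001 is accepted depends entirely on the server's register map):

#include <stdio.h>
#include <errno.h>
#include <modbus.h>

int main(void)
{
    modbus_t *ctx = modbus_new_tcp("192.168.0.99", 502);
    if (ctx == NULL) {
        fprintf(stderr, "Unable to allocate libmodbus context\n");
        return -1;
    }

    /* Send unit id 01 instead of the default 0xFF, as easymodbus expects */
    modbus_set_slave(ctx, 1);

    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "Connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }

    uint8_t coil = 0;
    if (modbus_write_bit(ctx, 0x0001, 1) == -1)              /* function 0x05 */
        fprintf(stderr, "write_bit failed: %s\n", modbus_strerror(errno));
    else if (modbus_read_bits(ctx, 0x0001, 1, &coil) == -1)  /* function 0x01 */
        fprintf(stderr, "read_bits failed: %s\n", modbus_strerror(errno));
    else
        printf("Coil 0x0001 = %d\n", coil);

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}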
I'm very grateful for your reply.
I tested the read/write function using the 'unit-test-server/client' code you recommended.
I've reviewed the code, but there are still many things I don't know.
However, when testing the unit-test-server and unit-test-client code against each other, some address values work and some do not (do you know why?).
- I checked and found that the UT_BITS_ADDRESS (address value) range works from 0x130 to 0x150.
- An 'Illegal data address' error occurs for values below 0x130 and above 0x150.
- The addresses I want to read/write are 0x0001 to 0x0004 (do you know how to do this?).
I want to know how to process and transmit data like the TX part of the right picture.
I'm running both client and server in my Linux environment and I'm doing read/write testing.
Among the wrong pictures ...[06][FF]... <-- I want to know how to modify the FF part (to change the value to 01 as shown in the picture).
and "modbus_set_slave" is the function for modbus rtu?
I'd like to communicate PC Program and Linux device in the end.
so Which part do I use that function?
I thanks for your concern again.

Assistant Entities and Different Speakers

It is possible to differentiate among speakers/users with the Watson Unity SDK, since it can return an array identifying which words were spoken by which speakers in a multi-person exchange, but I cannot figure out how to make it work, particularly in the case where I am sending different utterances (spoken by different people) to the Assistant service to get a response for each.
The code snippets for parsing Assistant's JSON output/response, as well as OnRecognize and OnRecognizeSpeaker and SpeechRecognitionResult and SpeakerLabelsResult, are there, but how do I get Watson to return this from the server when an utterance is recognized and its intent is extracted?
Both OnRecognize and OnRecognizeSpeaker are referenced only once, in the Active property, so both should be invoked, but only OnRecognize does the speech-to-text (transcription); OnRecognizeSpeaker is never fired...
public bool Active
{
    get
    {
        return _service.IsListening;
    }
    set
    {
        if (value && !_service.IsListening)
        {
            _service.RecognizeModel = (string.IsNullOrEmpty(_recognizeModel) ? "en-US_BroadbandModel" : _recognizeModel);
            _service.DetectSilence = true;
            _service.EnableWordConfidence = true;
            _service.EnableTimestamps = true;
            _service.SilenceThreshold = 0.01f;
            _service.MaxAlternatives = 0;
            _service.EnableInterimResults = true;
            _service.OnError = OnError;
            _service.InactivityTimeout = -1;
            _service.ProfanityFilter = false;
            _service.SmartFormatting = true;
            _service.SpeakerLabels = false;
            _service.WordAlternativesThreshold = null;
            _service.StartListening(OnRecognize, OnRecognizeSpeaker);
        }
        else if (!value && _service.IsListening)
        {
            _service.StopListening();
        }
    }
}
Typically, the output of Assistant (i.e. its result) is something like the following:
Response: {"intents":[{"intent":"General_Greetings","confidence":0.9962662220001222}],"entities":[],"input":{"text":"hello eva"},"output":{"generic":[{"response_type":"text","text":"Hey!"}],"text":["Hey!"],"nodes_visited":["node_1_1545671354384"],"log_messages":[]},"context":{"conversation_id":"f922f2f0-0c71-4188-9331-09975f82255a","system":{"initialized":true,"dialog_stack":[{"dialog_node":"root"}],"dialog_turn_counter":1,"dialog_request_counter":1,"_node_output_map":{"node_1_1545671354384":{"0":[0,0,1]}},"branch_exited":true,"branch_exited_reason":"completed"}}}
I have set up intents and entities, and this list is returned by the Assistant service, but I am not sure how to get it to also consider my entities or how to get it to respond accordingly when the STT recognizes different speakers.
I would appreciate some help, particularly how to do this via Unity scripting.
I had the exact same question about dealing with the Assistant's messages, so I looked at the Assistant.OnMessage() method, which returns a string like "Response: {0}", customData["json"].ToString() plus the JSON output, which will be something like this:
[Assistant.OnMessage()][DEBUG] Response: {"intents":[{"intent":"General_Greetings","confidence":1}],"entities":[],"input":{"text":"hello"},"output":{"text":["good evening"],"nodes_visited": etc...}
I personally parse the JSON in order to extract the content from messageResponse.Entities. In the above example you can see that the array is empty, but if you are populating it, then that's where you need to extract the values from, and then in your code you can do what you want.
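For the JSON-parsing part, here is a minimal sketch of what I mean (my own illustration, not SDK code; it only keeps the intents/entities arrays and ignores the rest of the response):

using System;
using UnityEngine;

[Serializable] public class WatsonIntent { public string intent; public float confidence; }
[Serializable] public class WatsonEntity { public string entity; public string value; public float confidence; }
[Serializable] public class WatsonResponse { public WatsonIntent[] intents; public WatsonEntity[] entities; }

public static class AssistantJson
{
    // Pass in customData["json"].ToString() from Assistant.OnMessage().
    // JsonUtility simply ignores fields we have not declared (input, output, context...).
    public static WatsonResponse Parse(string json)
    {
        return JsonUtility.FromJson<WatsonResponse>(json);
    }
}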
Regarding the different speaker recognition, in the Active property whose code you have included, the _service.StartListening(OnRecognize, OnRecognizeSpeaker) line takes care of both, so perhaps put some Debug.Log statements inside their code blocks to see if they are called or not.
Please set SpeakerLabels to True
_service.SpeakerLabels = true;
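For completeness, here is a sketch of what the second callback might look like once SpeakerLabels is enabled (the exact delegate signature and field names depend on your SDK version, so treat this as illustrative rather than definitive):

private void OnRecognizeSpeaker(SpeakerRecognitionEvent result)
{
    if (result == null || result.speaker_labels == null)
        return;

    foreach (SpeakerLabelsResult label in result.speaker_labels)
    {
        // label.speaker is a numeric id (0, 1, ...) per detected speaker;
        // label.from / label.to give the time range the label applies to.
        Debug.Log(string.Format("Speaker {0}: {1:0.00}s - {2:0.00}s (confidence {3:0.00})",
            label.speaker, label.from, label.to, label.confidence));
    }
}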

WinVerifyTrust returns CERT_E_UNTRUSTEDROOT for a valid (loaded) driver

In the following code snippet, WinVerifyTrust returns CERT_E_UNTRUSTEDROOT for a kernel driver file (.sys) that is loaded and running on the system:
GUID guidAction = DRIVER_ACTION_VERIFY;
WINTRUST_FILE_INFO sWintrustFileInfo = { 0 };
WINTRUST_DATA sWintrustData = { 0 };
HRESULT hr = 0;
sWintrustFileInfo.cbStruct = sizeof(WINTRUST_FILE_INFO);
sWintrustFileInfo.pcwszFilePath = argv[1];
sWintrustFileInfo.hFile = NULL;
sWintrustData.cbStruct = sizeof(WINTRUST_DATA);
sWintrustData.dwUIChoice = WTD_UI_NONE;
sWintrustData.fdwRevocationChecks = WTD_REVOKE_NONE;
sWintrustData.dwUnionChoice = WTD_CHOICE_FILE;
sWintrustData.pFile = &sWintrustFileInfo;
sWintrustData.dwStateAction = WTD_STATEACTION_VERIFY;
hr = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &guidAction, &sWintrustData);
A few interesting points:
- The driver is signed with a valid (purchased) certificate using SHA-256.
- KB3033929 is installed on the system (Win7/32)
- When viewing the certificate from the file properties, the entire certification chain shows up as valid
Am I calling WinVerifyTrust wrong?
Alternative question: is there another way of knowing (by the presence of a registry key or something similar) that SHA-256 based code signing verification is available on the target system? (I need to verify this during installation...)
Thanks :)
DRIVER_ACTION_VERIFY works well for WHQL signatures, as far as I know. Try
GUID WINTRUST_ACTION_GENERIC_VERIFY_V2
Here is something else you can refer to
http://gnomicbits.blogspot.in/2016/03/how-to-verify-pe-digital-signature.html
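A sketch of the suggested change, reusing the same WINTRUST_FILE_INFO/WINTRUST_DATA setup from the question (WINTRUST_ACTION_GENERIC_VERIFY_V2 and the WTD_* constants come from softpub.h/wintrust.h; the state-close call is my addition, but it is the documented way to release the verification state):

GUID guidAction = WINTRUST_ACTION_GENERIC_VERIFY_V2;  // instead of DRIVER_ACTION_VERIFY

sWintrustData.dwStateAction = WTD_STATEACTION_VERIFY;
LONG lStatus = WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &guidAction, &sWintrustData);
// ERROR_SUCCESS (0) means the signature verifies and chains to a trusted root;
// CERT_E_UNTRUSTEDROOT, TRUST_E_NOSIGNATURE, etc. indicate specific failures.

// Release the state data that WinVerifyTrust allocated for this verification.
sWintrustData.dwStateAction = WTD_STATEACTION_CLOSE;
WinVerifyTrust((HWND)INVALID_HANDLE_VALUE, &guidAction, &sWintrustData);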

Error reading from an IP-Camera

I capture images from an IP camera and work with the frames. My program detects when there is movement, and then it takes a photo and saves it on the computer.
It works perfectly at first, but after running for about 2-3 hours it usually gets an error, and I cannot find an explanation for this. If it were an error in capturing or processing the image, it should happen from the start, shouldn't it?
The error I get is the following:
Exception in thread "main" java.lang.NullPointerException
at com.googlecode.javacv.IPCameraFrameGrabber.grab(IPCameraFrameGrabber.java:105)
at Llamada.main(Llamada.java:34)
I have looked at line 105 of IPCameraFrameGrabber but have not found anything.
The program is the following:
public class Llamada {
    public static void main(String[] args) throws Exception {
        IPCameraFrameGrabber grabber = new IPCameraFrameGrabber("http://192.168.2.102:80/mjpg/video.mjpg");
        //OpenCVFrameGrabber grabber = new OpenCVFrameGrabber(0);
        grabber.start();

        IplImage frame = grabber.grab();
        IplImage image = null;
        IplImage prevImage = null;
        IplImage diff = null;

        Date data = new Date();
        String output = "";
        int i = 0, j = 0;

        CanvasFrame canvasFrame = new CanvasFrame("IP Camera");
        canvasFrame.setCanvasSize(frame.width(), frame.height());

        CvMemStorage storage = CvMemStorage.create();

        while (canvasFrame.isVisible() && (frame = grabber.grab()) != null) {
            cvSmooth(frame, frame, CV_GAUSSIAN, 9, 9, 2, 2);
            if (image == null) {
                image = IplImage.create(frame.width(), frame.height(), IPL_DEPTH_8U, 1);
                cvCvtColor(frame, image, CV_RGB2GRAY);
            } else {
                prevImage = IplImage.create(frame.width(), frame.height(), IPL_DEPTH_8U, 1);
                prevImage = image;
                image = IplImage.create(frame.width(), frame.height(), IPL_DEPTH_8U, 1);
                cvCvtColor(frame, image, CV_RGB2GRAY);
            }
            if (diff == null) {
                diff = IplImage.create(frame.width(), frame.height(), IPL_DEPTH_8U, 1);
            }
            if (prevImage != null) {
                // perform ABS difference
                cvAbsDiff(image, prevImage, diff);
                // do some threshold for wipe away useless details
                cvThreshold(diff, diff, 64, 255, CV_THRESH_BINARY);
                canvasFrame.showImage(diff);
                // recognize contours
                CvSeq contour = new CvSeq(null);
                cvFindContours(diff, storage, contour, Loader.sizeof(CvContour.class), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
                while (contour != null && !contour.isNull()) {
                    if (contour.elem_size() > 0) {
                        output = data.toString();
                        if (data != null)
                            output = output.substring(0, 10);
                        if (i % 300 == 0)
                            cvSaveImage((j++) + " " + output + "-capture.jpg", frame);
                        CvBox2D box = cvMinAreaRect2(contour, storage);
                        // test intersection
                        if (box != null) {
                            CvPoint2D32f center = box.center();
                            CvSize2D32f size = box.size();
                        }
                        i++;
                    }
                    contour = contour.h_next();
                }
            }
        }
        grabber.stop();
        canvasFrame.dispose();
    }
}
Thank you for everything!
Have you tried using a debugger and setting a break point? I understand that waiting around for 2-3 hours isn't fun, but maybe it'd help you get a handle on what's going on.
That seems to be in your while loop's second conditional part. Something inside the method grab on the grabber object is throwing a NullPointerException.
Probably the way you've initialized the grabber has led it to do this.
And it would be useful to know which version of the IPCameraFrameGrabber class you're using and what the author of that class really expected. Namely it's initialized to respond to a particular camera's url. In reading the class, it would appear this makes no claim to work with all IP cameras' MJPEG streams.
Let's look at one example comment in there:
foscam url http://host/videostream.cgi?user=username&pwd=password
http://192.168.0.59:60/videostream.cgi?user=admin&pwd=password
android ipcam http://192.168.0.57:8080/videofeed
And compare that to your url:
http://192.168.2.102:80/mjpg/video.mjpg
I gather it is not a foscam videostream.cgi url nor an android ipcam videofeed url, which would appear to be the only tested urls. It reminds me of an Axis camera url. More on that later.
In a recent version of that class (and in the older one too, actually), there seems to be a somewhat hackish attempt at reading only to the end of a sub-header that is always delimited by CRLFCRLF, which could have been done just as well with a buffered reader reading lines until it gets an empty one. What I do see here that seems likely to cause an NPE is:
When your url's http server's response does not contain the content-length header, which is quite possible, the returned readImage() byte[] is null.
Since javax.imageio.ImageIO specifies that it will throw an IllegalArgumentException when it gets a null input, I'm guessing that what's throwing is the ByteArrayInputStream constructor in the grabBufferedImage method, the IplImage.createFrom(null) in the old version, or the b.length in the newer version.
None of the line numbers of these versions line up with the error message you've shown, so maybe your version of the library is yet again different, and broken differently. Try using the debugger, and edit and patch the source of IPCameraFrameGrabber to better support your MJPEG-over-HTTP "device" based on what you find is really in the input stream of the HTTP response.
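Until you have a proper patch, one pragmatic workaround is to catch the failure and reopen the stream instead of letting the program die. This is my own sketch, not part of the original code, and it assumes IPCameraFrameGrabber can be restarted by calling stop() and then start() again:

static IplImage grabWithRetry(IPCameraFrameGrabber grabber) {
    try {
        return grabber.grab();
    } catch (Exception e) {                  // covers the NullPointerException seen after a few hours
        System.err.println("grab() failed: " + e + " - reconnecting");
        try { grabber.stop(); } catch (Exception ignored) { }
        try {
            grabber.start();                 // reopens the HTTP connection to the camera
            return grabber.grab();
        } catch (Exception e2) {
            System.err.println("reconnect failed: " + e2);
            return null;                     // caller can treat null like end-of-stream
        }
    }
}

In the loop you would then use while (canvasFrame.isVisible() && (frame = grabWithRetry(grabber)) != null) instead of calling grabber.grab() directly.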
Since the url format reminds me of an Axis camera, I tried this with one running firmware v5.50 with the boa server built in:
$ curl -I http://user:pass@10.10.10.10:8080/mjpg/video.mjpg
HTTP/1.0 200 OK
Cache-Control: no-cache
Pragma: no-cache
Expires: Thu, 01 Dec 1994 16:00:00 GMT
Connection: close
Content-Type: multipart/x-mixed-replace; boundary=myboundary
So you can see the content length is missing there. However, you do say you're getting frames for hours at first, and then it stops, so I'm kind of at a loss with that part. I mean, it sounds as though EITHER the input stream is getting closed, or the Java implementation wrapping the stream, implemented in the HTTP protocol handler, runs out of some kind of total space or open-connection timer for some reason. I know this seems vague.
Another thing that seems odd is that from what I read in the two example classes of IPCameraFrameGrabber linked, every call to grab reads the input stream looking for headers first, which doesn't make sense to me right now, and I feel as though I must be misreading that.
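To make the "read lines until an empty line" idea above concrete, here is a rough sketch of reading one part of a multipart/x-mixed-replace MJPEG stream (purely illustrative; a real fix would live inside IPCameraFrameGrabber and would also have to handle the missing Content-Length case discussed above):

@SuppressWarnings("deprecation")
static byte[] readOnePart(java.io.DataInputStream in) throws java.io.IOException {
    int contentLength = -1;
    String line;
    // Skip the boundary line and sub-headers; they end with an empty line (the CRLFCRLF mentioned above).
    while ((line = in.readLine()) != null && !line.trim().isEmpty()) {
        if (line.toLowerCase().startsWith("content-length:")) {
            contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
        }
    }
    if (contentLength < 0) {
        return null;   // the case suspected above: no Content-Length in the sub-header
    }
    byte[] jpeg = new byte[contentLength];
    in.readFully(jpeg);   // the JPEG body for this frame
    return jpeg;
}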