error: lvalue required

// getline: takes an empty string array and max length as input,
// stores the input stream in the array and returns its length
#include <stdio.h>
#include <stdlib.h>
#define LENGTH 100

int getline1(char *, int);

int main() {
    char *s;
    int i;
    s = (char *)malloc(LENGTH * sizeof(char));
    i = getline1(s, LENGTH);
    printf("%s %d", s, i);
    return 0;
}

int getline1(char *s, int lim) {
    int c, i;
    i = 0;
    printf("%u", s);
    while (--lim >= 0 && (c = getchar()) != EOF && c = '\n') {
        *(s + i) = c; // error: lvalue required
        i++;
    }
    if (c == '\n') {
        *(s + i) = c;
        i++;
    }
    *(s + i) = '\0';
    return i;
}
I get the error mentioned on the indicated line. Can anybody tell me what's wrong? The code works fine if I use arrays.

The assignment to *(s+i) is actually fine; the lvalue error comes from the while condition. c = '\n' is an assignment, not a comparison, and because && binds tighter than =, the compiler parses the condition as (... && c) = '\n', whose left-hand side is not an lvalue. Change it to c != '\n' (and you can write s[i] instead of *(s+i) for readability).
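A minimal sketch of the corrected loop, keeping the rest of the function as posted:
/* c != '\n' is a comparison; the assignment c = '\n' is what triggered the error */
while (--lim >= 0 && (c = getchar()) != EOF && c != '\n') {
    s[i] = c; /* equivalent to *(s + i) = c */
    i++;
}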


Sending strings to BPF Map Space and printing them out

I have a small txt file that I would like to write to a BPF map. Here is what my BPF program and Python code look like, but I am unable to print out anything as of now. I keep ending up with "Failed to load program: Invalid argument" and a bunch of register errors. As of now my string basically says hello, world, hi.
BPF_ARRAY(lookupTable, char, 512);

int helloworld2(void *ctx)
{
    // print the values in the lookup table
    #pragma clang loop unroll(full)
    for (int i = 0; i < 512; i++) {
        char *key = lookupTable.lookup(&i);
        if (key) {
            bpf_trace_printk("%s\n", key);
        }
    }
    return 0;
}
Here is the Python code:
from bcc import BPF
import ctypes

b = BPF(src_file="hello.c")
lookupTable = b["lookupTable"]

# add hello.csv to the lookupTable array
f = open("hello.csv", "r")
file_contents = f.read()
# append file contents to the lookupTable array
b_string1 = file_contents.encode('utf-8')
b_string1 = ctypes.create_string_buffer(b_string1)
lookupTable[0] = b_string1
f.close()

b.attach_kprobe(event=b.get_syscall_fnname("clone"), fn_name="helloworld2")
b.trace_print()
I have the error linked in this pastebin since it's so long:
BPF Error
One notable error is the mention of "infinite loop detected", which is something I would need to check out.
The issue is that i is passed by pointer in bpf_map_lookup_elem, so the compiler can't actually unroll the loop (from its point of view, i may not linearly increase).
Using an intermediate variable is enough to fix this:
BPF_ARRAY(lookupTable, char, 512);

#define MAX_LENGTH 1

int helloworld2(void *ctx)
{
    // print the values in the lookup table
    #pragma clang loop unroll(full)
    for (int i = 0; i < MAX_LENGTH; i++) {
        int k = i;
        char *key = lookupTable.lookup(&k);
        if (key) {
            bpf_trace_printk("%s\n", key);
        }
    }
    return 0;
}

Flutter Parse Method Explanation

I have a String date in format Month-Day-4DigitYear that I want to convert to DateTime in Flutter. I'm a novice coder, and I'm struggling to understand the api.flutter.dev Parse method example.
Below is the example. I just have a few issues. Android Studio throws multiple errors when I just create a class and put in this function. I think I understand the non-nullable issue, so I delete the ! and ? marks everywhere.
My issues are: what are _parseFormat, _brokenDownDateToValue, _withValue ?
All three give errors, and just declaring the first two and deleting _withValue doesn't seem to do the trick, although it removes all the errors. It's like they've left out a key portion that I'm missing, or there is a package I need to import that neither I nor Android Studio knows about. Can anyone decrypt this? I get very frustrated with Flutter's documentation, as it always seems to give 80% of the required info, assuming you are already clairvoyant on every topic except the single one being discussed. Gotta be a pro before reading the manual.
// TODO(lrn): restrict incorrect values like 2003-02-29T50:70:80.
// Or not, that may be a breaking change.
static DateTime parse(String formattedString) {
  var re = _parseFormat;
  Match? match = re.firstMatch(formattedString);
  if (match != null) {
    int parseIntOrZero(String? matched) {
      if (matched == null) return 0;
      return int.parse(matched);
    }

    // Parses fractional second digits of '.(\d+)' into the combined
    // microseconds. We only use the first 6 digits because of DateTime
    // precision of 999 milliseconds and 999 microseconds.
    int parseMilliAndMicroseconds(String? matched) {
      if (matched == null) return 0;
      int length = matched.length;
      assert(length >= 1);
      int result = 0;
      for (int i = 0; i < 6; i++) {
        result *= 10;
        if (i < matched.length) {
          result += matched.codeUnitAt(i) ^ 0x30;
        }
      }
      return result;
    }

    int years = int.parse(match[1]!);
    int month = int.parse(match[2]!);
    int day = int.parse(match[3]!);
    int hour = parseIntOrZero(match[4]);
    int minute = parseIntOrZero(match[5]);
    int second = parseIntOrZero(match[6]);
    int milliAndMicroseconds = parseMilliAndMicroseconds(match[7]);
    int millisecond =
        milliAndMicroseconds ~/ Duration.microsecondsPerMillisecond;
    int microsecond = milliAndMicroseconds
        .remainder(Duration.microsecondsPerMillisecond) as int;
    bool isUtc = false;
    if (match[8] != null) {
      // timezone part
      isUtc = true;
      String? tzSign = match[9];
      if (tzSign != null) {
        // timezone other than 'Z' and 'z'.
        int sign = (tzSign == '-') ? -1 : 1;
        int hourDifference = int.parse(match[10]!);
        int minuteDifference = parseIntOrZero(match[11]);
        minuteDifference += 60 * hourDifference;
        minute -= sign * minuteDifference;
      }
    }
    int? value = _brokenDownDateToValue(years, month, day, hour, minute,
        second, millisecond, microsecond, isUtc);
    if (value == null) {
      throw FormatException("Time out of range", formattedString);
    }
    return DateTime._withValue(value, isUtc: isUtc);
  } else {
    throw FormatException("Invalid date format", formattedString);
  }
}
My issues are: what are _parseFormat, _brokenDownDateToValue, _withValue ?
These are objects or functions declared elsewhere in the lib which are private (the _ as the first character declares objects and functions as private) and therefore not shown in the documentation.
_parseFormat seems to be a regular expression.
_brokenDownDateToValue seems to be a function.
_withValue is a named constructor.
I think what you want to use is the following if you want to parse your date String to a DateTime object: split it on '-' and rearrange the parts into year-month-day order, which DateTime.parse accepts.
var date = "11-28-2020"; // Month-Day-4DigitYear
var parts = date.split('-'); // [month, day, year]
var dateTime = DateTime.parse('${parts[2]}-${parts[0]}-${parts[1]}'); // "2020-11-28"
See https://api.flutter.dev/flutter/dart-core/DateTime/parse.html for the accepted strings to be parsed.
I did find the full code example here.
It didn't use the name _parseFormat, just a RegExp directly, and it has the _withValue and _brokenDownDateToValue declarations.
As I see it, there isn't a proper way to decode their example. The example is insufficient. A dictionary should not create definitions using words that can't be found elsewhere in the dictionary.

nanopb encode always size 0 (but no encode failure)

I have a very simple proto:
syntax = "proto2";
message TestMessage {
    optional int32 val = 1;
    optional string msg = 2; // I set max size to 40 in options, so TestMessage_size is defined.
}
...and I have the following code in my main loop for an Arduino program:
TestMessage test_msg = TestMessage_init_zero;
test_msg.val = 123;

// Print message length.
size_t msg_length;
bool get_msg_length = pb_get_encoded_size(&msg_length, TestMessage_fields, &test_msg);
Serial.println(msg_length);

// Encode and print message.
uint8_t testbuffer[TestMessage_size];
pb_ostream_t teststream = pb_ostream_from_buffer(testbuffer, sizeof(testbuffer));
bool teststatus = pb_encode(&teststream, TestMessage_fields, &test_msg);
if (!teststatus) {
    Serial.println("Failed to encode test message.");
    return;
}

Serial.print("Message: ");
Serial.println(teststream.bytes_written);
for (size_t i = 0; i < teststream.bytes_written; i++) {
    Serial.print(testbuffer[i], OCT);
}
Serial.println("testbuffer flushed");
For some reason I can print test_msg.val and it will show 123, but when I try to encode it (following examples like this one) it is always empty / has size 0.
Is this a configuration issue with nanopb? I wonder if the encode method requires something that I am not using?
For optional fields, you also have to set the has_field:
TestMessage test_msg = TestMessage_init_zero;
test_msg.has_val = true;
test_msg.val = 123;
That's because otherwise there is no way to know if the optional field has been set or not. C++ handles this via setter methods, but C doesn't have those.
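If it helps to sanity-check the fix, here is a minimal decode-side sketch (not from the original post; it assumes the nanopb-generated TestMessage type above and that pb_decode.h is included) that reads the buffer back and prints the value:
// Sketch: decode testbuffer back into a fresh struct to confirm the value
// round-trips once has_val is set.
pb_istream_t istream = pb_istream_from_buffer(testbuffer, teststream.bytes_written);
TestMessage decoded = TestMessage_init_zero;
if (pb_decode(&istream, TestMessage_fields, &decoded) && decoded.has_val) {
    Serial.println(decoded.val); // should print 123
}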

Variadic opIndex Override

Following my previous question, I'm now having trouble overriding opIndex with variadic parameters. I've tried multiple methods (even hack-ish ones) but to no avail.
The code I'm using to generate the identifier string
static string array_decl(D...)(string identifier, D dimensions)
{
    static if (dimensions.length == 0)
    {
        return identifier;
    }
    else
    {
        return array_decl(identifier ~ "[" ~ to!(string)(dimensions[0]) ~ "]", dimensions[1..$]);
    }
}
What my opIndex override looks like:
T opIndex(D...)(D indices)
{
    mixin("return " ~ array_decl("Data", indices) ~ ";");
}
Fails with:
./inheritance.d(81): Error: tuple D is used as a type
./inheritance.d(89): Error: template instance inheritance.array_ident!(int, int, int).array_ident.array_ident!(_param_2, _param_3) error instantiating
./inheritance.d(112): instantiated from here: array_ident!(int, int, int)
./inheritance.d(174): instantiated from here: opIndex!(int, int, int)
./inheritance.d(112): Error: CTFE failed because of previous errors in array_ident
./inheritance.d(112): Error: argument to mixin must be a string, not ("return " ~ array_ident("Data", _param_0, _param_1, _param_2) ~ ";") of type string
The question is how (or whether it is even possible) to implement the opIndex operator for this situation.
I think mixins are the way to go since I only have to generate a string with the format of:
type[index0][index1]...[indexN] Data
for the opIndex overload.
Apparently this is not possible as the tuple passed to opIndex is inaccessible at compile-time. A few solutions I came up with (by Adam D. Ruppe's suggestions):
1. Hard-coding Array Access
Use compile-time conditions to index the array. It's a bit ugly, and the number of dimensions that can be accessed depends on the number of conditions implemented.
T opIndex(D...)(D indices)
{
    static if (indices.length == 1)
    {
        return Data[indices[0]];
    }
    static if (indices.length == 2)
    {
        return Data[indices[0]][indices[1]];
    }
    static if (indices.length == 3)
    {
        return Data[indices[0]][indices[1]][indices[2]];
    }
    static if (indices.length == 4)
    {
        return Data[indices[0]][indices[1]][indices[2]][indices[3]];
    }
}
2. Pointer(s)
The only other method was to cast the array to a pointer and then use offsets: the offset is computed and then used to index the pointer.
To be able to access the template parameters at run-time:
struct Vector_MultiDim(T, D...)
{
    enum dimensions = [D];
    static const size_t DimCount = D.length;
    // ... other members here
}
Function to compute offset (the size of each dimension must be known at run-time):
size_t GetIndex(size_t[] indices)
{
    size_t index;
    for (size_t i = 0; i < DimCount; i++)
    {
        size_t factor = 1;
        for (size_t j = i + 1; j < DimCount; j++)
        {
            factor *= dimensions[j];
        }
        index += indices[i] * factor;
    }
    return index;
}
opIndex Override:
T opIndex(D...)(D indices)
{
    T* arr = cast(T*)Data;
    return arr[GetIndex([indices])];
}

Transmission of float values over TCP/IP and data corruption

I have an extremely strange bug.
I have two applications that communicate over TCP/IP.
Application A is the server, and application B is the client.
Application A sends a bunch of float values to application B every 100 milliseconds.
The bug is the following: sometimes some of the float values received by application B are not the same as the values transmitted by application A.
Initially, I thought there was a problem with the Ethernet or TCP/IP drivers (some sort of data corruption). I then tested the code in other Windows machines, but the problem persisted.
I then tested the code on Linux (Ubuntu 10.04.1 LTS) and the problem is still there!!!
The values are logged just before they are sent and just after they are received.
The code is pretty straightforward: the message protocol has a 4 byte header like this:
// message header
struct MESSAGE_HEADER {
    unsigned short type;
    unsigned short length;
};

// orientation message
struct ORIENTATION_MESSAGE : MESSAGE_HEADER
{
    float azimuth;
    float elevation;
    float speed_az;
    float speed_elev;
};

// any message
struct MESSAGE : MESSAGE_HEADER {
    char buffer[512];
};

// receive specific size of bytes from the socket
static int receive(SOCKET socket, void *buffer, size_t size) {
    int r;
    do {
        r = recv(socket, (char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

// send specific size of bytes to a socket
static int send(SOCKET socket, const void *buffer, size_t size) {
    int r;
    do {
        r = send(socket, (const char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

// get message from socket
static bool receive(SOCKET socket, MESSAGE &msg) {
    int r = receive(socket, &msg, sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    if (ntohs(msg.length) == 0) return true;
    r = receive(socket, msg.buffer, ntohs(msg.length));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}

// send message
static bool send(SOCKET socket, const MESSAGE &msg) {
    int r = send(socket, &msg, ntohs(msg.length) + sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}
When I receive the 'orientation' message, sometimes the 'azimuth' value is different from the one sent by the server!
Shouldn't the data be the same all the time? Doesn't TCP/IP guarantee delivery of the data uncorrupted? Could it be that an exception in the math co-processor affects the TCP/IP stack? Is it a problem that I receive a small number of bytes first (4 bytes) and then the message body?
EDIT:
The problem is in the endianness-swapping routine. The following code swaps the endianness of a specific float, then swaps it back again and prints the bytes:
#include <iostream>
#include <cstdio>
using namespace std;

float ntohf(float f)
{
    float r;
    unsigned char *s = (unsigned char *)&f;
    unsigned char *d = (unsigned char *)&r;
    d[0] = s[3];
    d[1] = s[2];
    d[2] = s[1];
    d[3] = s[0];
    return r;
}

int main() {
    unsigned long l = 3206974079;
    float f1 = (float &)l;
    float f2 = ntohf(ntohf(f1));
    unsigned char *c1 = (unsigned char *)&f1;
    unsigned char *c2 = (unsigned char *)&f2;
    printf("%02X %02X %02X %02X\n", c1[0], c1[1], c1[2], c1[3]);
    printf("%02X %02X %02X %02X\n", c2[0], c2[1], c2[2], c2[3]);
    getchar();
    return 0;
}
The output is:
7F 8A 26 BF
7F CA 26 BF
I.e. the float assignment probably normalizes the value, producing a different value from the original.
Any input on this is welcomed.
EDIT2:
Thank you all for your replies. It seems the problem is that the byte-swapped float, when returned via the 'return' statement, is pushed onto the CPU's floating-point stack. The caller then pops it off the stack, and in doing so the (still byte-swapped) value gets rounded/normalized, which corrupts the bit pattern.
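One way to avoid this, sketched below (not from the original post, and assuming 32-bit IEEE-754 floats), is to never pass the byte-swapped pattern through the FPU: do the swap on a 32-bit integer copy of the bits and only reinterpret it as a float once it is back in native byte order.
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h> /* htonl/ntohl; use winsock2.h on Windows */

/* Sketch: transport floats as 32-bit integers in network byte order so the
   swapped bit pattern never exists as a live float value. */
uint32_t float_to_net(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits); /* copy the raw bits without a float load */
    return htonl(bits);
}

float net_to_float(uint32_t net) {
    uint32_t bits = ntohl(net);
    float f;
    memcpy(&f, &bits, sizeof f); /* reinterpret only in native byte order */
    return f;
}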
TCP tries to deliver unaltered bytes, but unless the machines have similar CPUs and operating systems, there's no guarantee that the floating-point representation on one system is identical to that on the other. You need a mechanism for ensuring this, such as XDR or Google's protobuf.
You're sending binary data over the network, using implementation-defined padding for the struct layout, so this will only work if you're using the same hardware, OS and compiler for both application A and application B.
If that's OK, though, I can't see anything wrong with your code. One potential issue is that you're using ntohs to extract the length of the message, and that length is the total length minus the header length, so you need to make sure you're setting it properly. It needs to be set as
msg.length = htons(sizeof(ORIENTATION_MESSAGE) - sizeof(MESSAGE_HEADER));
but you don't show the code that sets up the message...
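For reference, a minimal sketch (not from the question; the type id 1 and the field values are placeholders, and sock stands for an already-connected socket) of what that setup might look like, using the 3-argument send helper above. The struct-padding and float-representation caveats from the answers still apply.
ORIENTATION_MESSAGE om;
om.type = htons(1); // hypothetical message type id
om.length = htons(sizeof(ORIENTATION_MESSAGE) - sizeof(MESSAGE_HEADER));
om.azimuth = 10.0f;
om.elevation = 20.0f;
om.speed_az = 0.5f;
om.speed_elev = 0.25f;
send(sock, &om, sizeof(MESSAGE_HEADER) + ntohs(om.length));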