WFP ALE_CONNECT_REDIRECT layer block filter doesn't work - redirect

I am doing some work with WFP and have a problem with a blocking filter on the FWPM_LAYER_ALE_CONNECT_REDIRECT_V4 layer. It should block traffic from a local IP, but it doesn't. If I change the layer to FWPM_LAYER_ALE_AUTH_CONNECT_V4, the filter works properly.
So I have several questions:
1) Can I block traffic from a specified local IP on the FWPM_LAYER_ALE_CONNECT_REDIRECT_V4 layer (the code below doesn't work)?
2) Can we create conditions with local_ip (remote_ip) on the ale_connect_redirect (or ale_bind_redirect) layers?
UINT32 test_wfp_filter(HANDLE engine_handle,
                       FWP_V4_ADDR_AND_MASK* source_ip,
                       UINT8 weight)
{
    UINT32 status;
    FWPM_FILTER filter = { 0 };
    FWPM_FILTER_CONDITION filter_conditions[1] = { 0 };

    filter_conditions[0].fieldKey = FWPM_CONDITION_IP_LOCAL_ADDRESS;
    filter_conditions[0].matchType = FWP_MATCH_EQUAL;
    filter_conditions[0].conditionValue.type = FWP_V4_ADDR_MASK;
    filter_conditions[0].conditionValue.v4AddrMask = source_ip;

    status = UuidCreate(&(filter.filterKey));
    if (status != NO_ERROR)
    {
        return status;
    }

    filter.layerKey = FWPM_LAYER_ALE_CONNECT_REDIRECT_V4;
    // With this layerKey the filter doesn't work,
    // but with FWPM_LAYER_ALE_AUTH_CONNECT_V4 the filter works properly.
    filter.displayData.name = L"Blocking filter";
    filter.displayData.description = L"Blocks all traffic from the current computer";
    filter.action.type = FWP_ACTION_BLOCK;
    filter.subLayerKey = WFP_TEST_SUBLAYER;
    filter.weight.type = FWP_UINT8;
    filter.weight.uint8 = weight;
    filter.filterCondition = filter_conditions;
    filter.numFilterConditions = 1;

    status = FwpmFilterAdd(engine_handle, &filter, 0, 0);
    return status;
}
Thank you!

It's not 100% obvious what you are trying to achieve but:
No, the ALE_CONNECT_REDIRECT and ALE_BIND_REDIRECT layers are for modifying source/destination details associated with a flow (prior to establishment), not blocking the flow. An example usage would be writing a local proxy; you might install an ALE_CONNECT_REDIRECT callout which modifies the destination details for an attempted connection such that the connection is actually made to your own application rather than where it was originally intended.
You can definitely use source and destination IP address conditions with ALE_CONNECT_REDIRECT and ALE_BIND_REDIRECT; just remember that these layers are for redirecting, not blocking.
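For the blocking case, a minimal sketch that reuses engine_handle, source_ip, weight and WFP_TEST_SUBLAYER from the question and keeps the same condition, but adds the filter at the AUTH layer, where block/permit decisions are made (the question already confirms this layer works):
// Hedged sketch: block outbound traffic from a given local IPv4 address.
FWPM_FILTER filter = { 0 };
FWPM_FILTER_CONDITION cond = { 0 };

cond.fieldKey = FWPM_CONDITION_IP_LOCAL_ADDRESS;
cond.matchType = FWP_MATCH_EQUAL;
cond.conditionValue.type = FWP_V4_ADDR_MASK;
cond.conditionValue.v4AddrMask = source_ip;

UuidCreate(&filter.filterKey);
filter.layerKey = FWPM_LAYER_ALE_AUTH_CONNECT_V4;  // blocking belongs on the AUTH layers
filter.displayData.name = L"Blocking filter";
filter.action.type = FWP_ACTION_BLOCK;
filter.subLayerKey = WFP_TEST_SUBLAYER;
filter.weight.type = FWP_UINT8;
filter.weight.uint8 = weight;
filter.filterCondition = &cond;
filter.numFilterConditions = 1;

UINT32 status = FwpmFilterAdd(engine_handle, &filter, NULL, NULL);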


x710 VF with DPDK: rte_flow_validate() returns Function not implemented

OS: CentOS 7.3
DPDK: 19.08
I use one X710 NIC, create 2 VFs with the kernel driver i40e, bind the vfio-pci driver to VF 0, and start a DPDK PMD application.
Then I try to create a flow rule using rte_flow, but rte_flow_validate() returns -38, Function not implemented.
Does this mean the VF doesn't support the rte_flow API, or is there some configuration or flag that needs to be set on the VF?
DPDK rte_flow is supported on both PF and VF for the X710 (Fortville) NIC, with actions like
RTE_FLOW_ACTION_TYPE_QUEUE
RTE_FLOW_ACTION_TYPE_DROP
RTE_FLOW_ACTION_TYPE_PASSTHRU
RTE_FLOW_ACTION_TYPE_MARK
RTE_FLOW_ACTION_TYPE_RSS
The return value -38 from the DPDK API here is not Function not implemented but actually I40E_ERR_OPCODE_MISMATCH. This means either the lookup parameters or the match cases are improperly configured. A code snippet that works on an X710 VF is shared below:
/* configure for 2 RX queues */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item pattern[10];
struct rte_flow_action actions[10];
struct rte_flow_item_eth eth;
struct rte_flow_item_eth eth_mask;
struct rte_flow_item_vlan vlan;
struct rte_flow_item_vlan vlan_mask;
struct rte_flow_item_ipv4 ipv4;
struct rte_flow_item_ipv4 ipv4_mask;
struct rte_flow *flow;
struct rte_flow_action_mark mark = { .id = 0xdeadbeef };
struct rte_flow_action_queue queue = { .index = 0x3 };
memset(&pattern, 0, sizeof(pattern));
memset(&actions, 0, sizeof(actions));
memset(&attr, 0, sizeof(attr));
attr.group = 0;
attr.priority = 0;
attr.ingress = 1;
attr.egress = 0;
memset(&eth_mask, 0, sizeof(struct rte_flow_item_eth));
pattern[0].type = RTE_FLOW_ITEM_TYPE_ETH;
pattern[0].spec = &eth;
pattern[0].last = NULL;
pattern[0].mask = NULL;
memset(&vlan_mask, 0, sizeof(struct rte_flow_item_vlan));
pattern[1].type = RTE_FLOW_ITEM_TYPE_VLAN;
pattern[1].spec = &vlan;
pattern[1].last = NULL;
pattern[1].mask = NULL;
/* set the dst ipv4 packet to the required value */
pattern[2].type = RTE_FLOW_ITEM_TYPE_IPV4;
pattern[2].spec = NULL;
pattern[2].last = NULL;
pattern[2].mask = NULL;
pattern[3].type = RTE_FLOW_ITEM_TYPE_UDP;
pattern[3].spec = NULL;
pattern[3].last = NULL;
pattern[3].mask = NULL;
/* end the pattern array */
pattern[4].type = RTE_FLOW_ITEM_TYPE_END;
/* create the mark action */
actions[0].type = RTE_FLOW_ACTION_TYPE_MARK;
actions[0].conf = &mark;
actions[1].type = RTE_FLOW_ACTION_TYPE_END;
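For completeness, the rule still has to be validated and created; a minimal sketch, assuming port_id is the VF's DPDK port id and error is a local rte_flow_error (neither appears in the snippet above):
struct rte_flow_error error;
memset(&error, 0, sizeof(error));

/* validate first, then create the rule on the port */
int ret = rte_flow_validate(port_id, &attr, pattern, actions, &error);
if (ret == 0)
    flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
else
    printf("rte_flow_validate failed: %d (%s)\n", ret,
           error.message ? error.message : "no message");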
Note: I have asked #myzhu in the comments to share the actual code snippet so the issue can be root-caused as well.

What are the cpumaps and maplen arguments in the libvirt API virDomainGetVcpus?

I am trying to get information about the vCPUs running on my machine, and for that I am using libvirt.
I am not able to understand how to use the API virDomainGetVcpus, which has the arguments cpumaps and maplen.
I am using C.
Please let me know if you have some insight.
Thanks.
You need to use virDomainGetInfo and virNodeGetInfo to get the number of guest CPUs and the number of host CPUs. Then you can allocate a map of the right size: maplen is the number of bytes needed for one bitmap of host CPUs, and cpumaps holds one such bitmap per virtual CPU. This code would do the trick:
virNodeInfo nodeinfo;
virDomainInfo dominfo;
virVcpuInfoPtr cpuinfo;
unsigned char *cpumaps;
int nhostcpus, cpumaplen, ncpus;

if (virNodeGetInfo(conn, &nodeinfo) < 0)
    return -1;
nhostcpus = VIR_NODEINFO_MAXCPUS(nodeinfo);

if (virDomainGetInfo(dom, &dominfo) != 0)
    return -1;

cpuinfo = malloc(sizeof(virVcpuInfo) * dominfo.nrVirtCpu);
cpumaplen = VIR_CPU_MAPLEN(nhostcpus);
/* one host-CPU bitmap of cpumaplen bytes per vCPU (plain calloc instead of virsh's vshMalloc) */
cpumaps = calloc(dominfo.nrVirtCpu, cpumaplen);

if ((ncpus = virDomainGetVcpus(dom,
                               cpuinfo, dominfo.nrVirtCpu,
                               cpumaps, cpumaplen)) < 0)
    return -1;
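Once the call succeeds, cpumaps holds one host-CPU bitmap per virtual CPU; a hedged sketch of reading it back with libvirt's VIR_CPU_USABLE macro, assuming the variables declared above:
int v, c;
for (v = 0; v < ncpus; v++) {
    printf("vCPU %d is currently on host CPU %d, pinned to:", v, cpuinfo[v].cpu);
    for (c = 0; c < nhostcpus; c++) {
        /* VIR_CPU_USABLE tests bit c in the map belonging to vCPU v */
        if (VIR_CPU_USABLE(cpumaps, cpumaplen, v, c))
            printf(" %d", c);
    }
    printf("\n");
}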

JSTree creating duplicate nodes when loading data with create_node

I'm having an issue when trying to load my initial data for jsTree. I have 2 top-level nodes attached to the root node, but when I load them it looks like the last node added is being duplicated within jsTree. At first it looked as if it was my fault for not specifically declaring a new object each time, but I've fixed that. I'm using .NET MVC, so the initial data comes from the model passed to my view (that is the data passed into the data parameter of the method).
this.loadInitialData = function (data) {
    var tree = self.getTree();
    for (var i = 0; i < data.length; i++) {
        var node = new Object();
        node.id = data[i].Id;
        node.parent = data[i].Parent;
        node.text = data[i].Text;
        node.state = {
            opened: data[i].State.Opened,
            disabled: data[i].State.Disabled,
            selected: data[i].State.Selected
        };
        node.li_attr = { "node-type": data[i].NodeType };
        node.children = [];
        for (var j = 0; j < data[i].Children.length; j++) {
            var childNode = new Object();
            childNode.id = data[i].Children[j].Id;
            childNode.parent = data[i].Children[j].Parent;
            childNode.text = data[i].Children[j].Text;
            childNode.li_attr = { "node-type": data[i].Children[j].NodeType };
            childNode.children = data[i].Children[j].HasChildren;
            node.children.push(childNode);
        }
        tree.create_node("#", node, "last");
    }
}
My initial code was declaring node like the following:
var node = {
id: data[i].Id
}
I figured that was the cause of what I'm seeing, but fixing it has not changed anything. On the first pass of the method everything looks like it is working just fine, but after the loop runs for the second (and last) time, the last node added shows up twice.
It looks like the node objects are just copies of each other, but when I run the code through the debugger I see the object being initialized each time. Does anyone have an idea what would cause this behavior in jsTree? Should I be using a different method to create my initial nodes besides create_node?
Thanks in advance.
I found the issue: I didn't realize it, but I was setting the id property to the same value for both top-level nodes. After I gave each node a unique id, everything started working as expected.

Erasing page on stm32 fails with FLASH_ERROR_WRP

I am trying to erase one page in flash on an STM32F103RB like so:
FLASH_Unlock();
FLASH_ClearFlag(FLASH_FLAG_BSY | FLASH_FLAG_EOP | FLASH_FLAG_PGERR | FLASH_FLAG_WRPRTERR | FLASH_FLAG_OPTERR);
FLASHStatus = FLASH_ErasePage(Page);
However, FLASH_ErasePage fails, producing FLASH_ERROR_WRP.
Manually enabling/disabling write protection in the stm32-linker tool doesn't fix the problem.
Basically, FLASH_ErasePage fails with a WRP error without trying to do anything if there is a previous WRP error in the status register.
As for your FLASH_ClearFlag call, at least FLASH_FLAG_BSY will cause assert_param(IS_FLASH_CLEAR_FLAG(FLASH_FLAG)); to fail (though I'm not really sure what happens in that case).
#define IS_FLASH_CLEAR_FLAG(FLAG) ((((FLAG) & (uint32_t)0xFFFFC0FD) == 0x00000000) && ((FLAG) != 0x00000000))
What is your page address? Which address are you trying to access?
For instance, this example has been tested on an STM32F100C8, not only for erasing but also for writing data correctly:
http://www.ozturkibrahim.com/TR/eeprom-emulation-on-stm32/
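Building on the flag point above, a minimal sketch with the standard peripheral library that drops FLASH_FLAG_BSY from the clear call; the example page address and the 1 KB page size of the STM32F103RB are assumptions you should check against your reference manual:
/* Hedged sketch: erase one page on an STM32F103RB (medium-density part). */
#define PAGE_ADDR   0x0801FC00U   /* example: last 1 KB page of 128 KB flash (assumption) */

FLASH_Status status;

FLASH_Unlock();
/* Clear only the sticky error/EOP flags; FLASH_FLAG_BSY is not a clearable flag. */
FLASH_ClearFlag(FLASH_FLAG_EOP | FLASH_FLAG_PGERR | FLASH_FLAG_WRPRTERR);

status = FLASH_ErasePage(PAGE_ADDR);
FLASH_Lock();

if (status != FLASH_COMPLETE) {
    /* FLASH_ERROR_WRP here means the page really is write-protected
       (check the option bytes), not just a stale flag. */
}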
If you are using the HAL driver, your code might look like this (cut and paste from a real project):
static HAL_StatusTypeDef Erase_Main_Program (void)
{
    FLASH_EraseInitTypeDef ins;
    uint32_t sectorerror;

    ins.TypeErase = FLASH_TYPEERASE_SECTORS;
    ins.Banks = FLASH_BANK_1; /* Do not care, used for mass-erase */
#warning We currently erase from sector 2 (only keep 64KB of flash for boot)
    ins.Sector = FLASH_SECTOR_4;
    ins.NbSectors = 4;
    ins.VoltageRange = FLASH_VOLTAGE_RANGE_3; /* voltage-range defines how big blocks can be erased at the same time */

    return HAL_FLASHEx_Erase (&ins, &sectorerror);
}
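For context, a hedged usage sketch: HAL_FLASHEx_Erase only works between unlock and lock calls, so the caller would wrap it roughly like this (the error-handling detail is illustrative):
/* Illustrative caller: unlock, erase, then re-lock the flash. */
HAL_StatusTypeDef status;

HAL_FLASH_Unlock();
status = Erase_Main_Program();
HAL_FLASH_Lock();

if (status != HAL_OK) {
    /* inspect HAL_FLASH_GetError() here, e.g. for HAL_FLASH_ERROR_WRP */
}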
The internal function in the HAL driver that actually does the work
void FLASH_Erase_Sector(uint32_t Sector, uint8_t VoltageRange)
{
    uint32_t tmp_psize = 0U;

    /* Check the parameters */
    assert_param(IS_FLASH_SECTOR(Sector));
    assert_param(IS_VOLTAGERANGE(VoltageRange));

    if(VoltageRange == FLASH_VOLTAGE_RANGE_1)
    {
        tmp_psize = FLASH_PSIZE_BYTE;
    }
    else if(VoltageRange == FLASH_VOLTAGE_RANGE_2)
    {
        tmp_psize = FLASH_PSIZE_HALF_WORD;
    }
    else if(VoltageRange == FLASH_VOLTAGE_RANGE_3)
    {
        tmp_psize = FLASH_PSIZE_WORD;
    }
    else
    {
        tmp_psize = FLASH_PSIZE_DOUBLE_WORD;
    }

    /* If the previous operation is completed, proceed to erase the sector */
    CLEAR_BIT(FLASH->CR, FLASH_CR_PSIZE);
    FLASH->CR |= tmp_psize;
    CLEAR_BIT(FLASH->CR, FLASH_CR_SNB);
    FLASH->CR |= FLASH_CR_SER | (Sector << POSITION_VAL(FLASH_CR_SNB));
    FLASH->CR |= FLASH_CR_STRT;
}
A second thing to check: are interrupts enabled, and is there any hardware access between the unlock call and the erase call?
I hope this helps.

Getting problems with sockets and select

I am implementing a socket programming project in C. I am using select() to wait for data from clients. I have two UDP sockets, and the select call is always ignoring one of them. Can anybody briefly describe where I should start looking? This is what my server is doing:
waitThreshold.tv_sec = 5000;
waitThreshold.tv_usec = 50;

if(sd > sd1)
    max_sd = (sd + 1);
else if(sd1 > sd)
    max_sd = (sd1 + 1);

FD_ZERO(&read_sds);
FD_SET(sd, &read_sds);
FD_SET(sd1, &read_sds);

ret = select(max_sd, &read_sds, NULL, NULL, &waitThreshold);
if(ret < 0)
{
    printf("\nSelect thrown an exception\n");
    return 0;
}
else if(FD_ISSET(sd, &read_sds))
{
    // code for socket one
}
else if(FD_ISSET(sd1, &read_sds))
{
    // code for socket two
}
You wrote else if, so only one of them will run.
Generally speaking, when polling multiple sockets using select() you want to use a for loop instead of branching the code with ifs. Also note that select() CHANGES the fd_set arguments (the read, write and error file descriptor sets, i.e. the 2nd, 3rd and 4th arguments), so you need to re-set them before each select(); on some systems it modifies the timeout as well, so re-initialize that too. A pretty general code layout for selecting sockets that have data to read, with multiple concurrent connections, would be something like this:
FD_ZERO(&master_sds);
FD_ZERO(&read_sds);
for (i = 0; i < number_of_sockets; i++) {
    FD_SET(sd[i], &master_sds);
    if (sd[i] > max_sd) {
        max_sd = sd[i];
    }
}
for (;;) {
    /* select() modifies read_sds, so restore it from the master copy each time;
       waitThreshold may also need re-initializing here */
    read_sds = master_sds;
    ret = select(max_sd + 1, &read_sds, NULL, NULL, &waitThreshold);
    if (ret < 0) {
        printf("\nSelect thrown an exception\n");
        return 0;
    }
    for (i = 0; i < number_of_sockets; i++) {
        if (FD_ISSET(sd[i], &read_sds)) {
            // code for socket i
        }
    }
}
You might not want an endless loop polling the sockets for data; you can insert a condition, such as receiving specific data on one of the sockets or specific user input, as an exit condition. Hope this helps.
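To tie this back to the original two-UDP-socket setup, here is a minimal self-contained sketch of the same pattern; the port numbers 5000 and 5001 are placeholders, not values from the question:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/types.h>
#include <sys/select.h>
#include <sys/socket.h>

/* Create a UDP socket bound to the given port on all interfaces. */
static int make_udp_socket(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); exit(1); }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }
    return fd;
}

int main(void)
{
    int sd[2];
    sd[0] = make_udp_socket(5000);   /* placeholder ports */
    sd[1] = make_udp_socket(5001);

    int max_sd = sd[0] > sd[1] ? sd[0] : sd[1];

    for (;;) {
        fd_set read_sds;
        FD_ZERO(&read_sds);
        FD_SET(sd[0], &read_sds);
        FD_SET(sd[1], &read_sds);

        /* Re-initialize the timeout each iteration; select() may modify it. */
        struct timeval waitThreshold = { .tv_sec = 5, .tv_usec = 0 };

        int ret = select(max_sd + 1, &read_sds, NULL, NULL, &waitThreshold);
        if (ret < 0) { perror("select"); return 1; }
        if (ret == 0) continue;       /* timeout, nothing to read */

        for (int i = 0; i < 2; i++) {
            if (FD_ISSET(sd[i], &read_sds)) {
                char buf[1500];
                ssize_t n = recvfrom(sd[i], buf, sizeof(buf), 0, NULL, NULL);
                if (n > 0)
                    printf("socket %d: received %zd bytes\n", i, n);
            }
        }
    }
}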