I changed the region and time settings while my VMware virtual machine was running, then closed VMware and restarted the computer. Now a "Dictionary problem" error is shown when I power on my Ubuntu virtual machine. I didn't do anything to VMware Workstation itself; I only changed the Windows date and region and restarted.
I used a compare tool to diff the .vmx file and changed the config:
.encoding = "GBK"
config.version = "8"
virtualHW.version = "16"
mks.enable3d = "TRUE"
pciBridge0.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
vvtd.enable = "TRUE"
displayName = "Ubuntu 64 works"
guestOS = "ubuntu-64"
nvram = "Ubuntu 64 λ.nvram"
virtualHW.productCompatibility = "hosted"
powerType.powerOff = "soft"
powerType.powerOn = "soft"
powerType.suspend = "soft"
powerType.reset = "soft"
vhv.enable = "TRUE"
vpmc.enable = "TRUE"
tools.syncTime = "FALSE"
sound.autoDetect = "TRUE"
sound.fileName = "-1"
sound.present = "TRUE"
numvcpus = "8"
cpuid.coresPerSocket = "8"
vcpu.hotadd = "TRUE"
memsize = "8192"
mem.hotadd = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0.present = "TRUE"
sata0.present = "TRUE"
scsi0:0.fileName = "Ubuntu 64 λ.vmdk"
scsi0:0.present = "TRUE"
sata0:1.deviceType = "cdrom-raw"
sata0:1.fileName = "auto detect"
sata0:1.present = "TRUE"
usb.present = "TRUE"
ehci.present = "TRUE"
usb_xhci.present = "TRUE"
usb.generic.allowHID = "TRUE"
svga.graphicsMemoryKB = "786432"
ethernet0.addressType = "generated"
ethernet0.virtualDev = "e1000"
ethernet0.linkStatePropagation.enable = "TRUE"
serial0.fileType = "thinprint"
serial0.fileName = "thinprint"
ethernet0.present = "TRUE"
serial0.present = "TRUE"
extendedConfigFile = "Ubuntu 64 λ.vmxf"
uuid.bios = "56 4d 2d 2f 03 4f 8b 31-bb ab 41 59 d7 bf 6c 82"
uuid.location = "56 4d b7 59 0f 4e a8 56-10 83 f2 c1 bb 2a f4 a2"
scsi0:0.redo = ""
pciBridge0.pciSlotNumber = "17"
pciBridge4.pciSlotNumber = "21"
pciBridge5.pciSlotNumber = "22"
pciBridge6.pciSlotNumber = "23"
pciBridge7.pciSlotNumber = "24"
scsi0.pciSlotNumber = "16"
usb.pciSlotNumber = "32"
ethernet0.pciSlotNumber = "33"
sound.pciSlotNumber = "34"
ehci.pciSlotNumber = "35"
usb_xhci.pciSlotNumber = "160"
vmci0.pciSlotNumber = "36"
sata0.pciSlotNumber = "37"
vmotion.checkpointFBSize = "4194304"
vmotion.checkpointSVGAPrimarySize = "268435456"
ethernet0.generatedAddress = "00:0c:29:bf:6c:82"
ethernet0.generatedAddressOffset = "0"
vmci0.id = "-675320701"
monitor.phys_bits_used = "43"
cleanShutdown = "FALSE"
softPowerOff = "FALSE"
svga.guestBackedPrimaryAware = "TRUE"
sata0:1.autodetect = "TRUE"
tools.remindInstall = "FALSE"
toolsInstallManager.updateCounter = "1"
toolsInstallManager.lastInstallError = "0"
gui.lastPoweredViewMode = "fullscreen"
gui.stretchGuestMode = "aspect"
gui.enableStretchGuest = "FALSE"
usb_xhci:6.speed = "2"
usb_xhci:6.present = "TRUE"
usb_xhci:6.deviceType = "hub"
usb_xhci:6.port = "6"
usb_xhci:6.parent = "-1"
usb_xhci:7.speed = "4"
usb_xhci:7.present = "TRUE"
usb_xhci:7.deviceType = "hub"
usb_xhci:7.port = "7"
usb_xhci:7.parent = "-1"
sata0:1.startConnected = "FALSE"
scsi0:1.deviceType = "rawDisk"
scsi0:1.fileName = "Ubuntu 64 λ-0.vmdk"
scsi0:1.redo = ""
scsi0:1.mode = "independent-persistent"
scsi0:1.present = "TRUE"
workingDir = "Ubuntu 64 λ.vmx.lck"
sata0:0.present = "FALSE"
floppy0.present = "FALSE"
usb_xhci:4.present = "TRUE"
usb_xhci:4.deviceType = "hid"
usb_xhci:4.port = "4"
usb_xhci:4.parent = "-1"
In the current Terraform pipeline, I am passing topics as a list:
locals {
  test_topics = [
    {
      name                      = "topic1"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 3
    },
    {
      name                      = "topic2"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 4
    },
    {
      name                      = "topic3"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 5
    },
    {
      name                      = "topic4"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
    },
    {
      name                      = "topic5"
      is_public                 = true
      version                   = 1
      is_cleanup_policy_compact = true
      max_message_bytes         = "-1"
      partition_count           = 5
    }
  ]
}
# Example: create topics; this automatically assigns READ/WRITE access to the service account and READ access to all PUBLIC topics
module "test_topics" {
source = "../kafka_topic"
topics = "${local.test_topics}"
environment = var.environment
data_domain = var.data_domain
service_account = var.service_account
}
and I am declaring the variable in the child module like below:
variable "topics" {
type = list(object({
name = string
is_public = bool
is_cleanup_policy_compact = bool
version = number
max_message_bytes = number
partition_count = number
}))
description = "list of topics with their configuration"
default = null
}
In the child module's main.tf, we create the topics using the following code:
resource "kafka_topic" "topic" {
count = length(var.topics)
name = "${lookup(var.topics[count.index], "is_public") ? "public" :"private"}_${var.environment}_${var.data_domain}_${lookup(var.topics[count.index], "name")}_${lookup(var.topics[count.index], "version")}"
partitions = lookup(var.topics[count.index], "partition_count") == null ? 6 : "${lookup(var.topics[count.index], "partition_count")}"
replication_factor = 3
config = {
"cleanup.policy" = lookup(var.topics[count.index], "is_cleanup_policy_compact") ? "compact" : "delete"
"max.message.bytes" = lookup(var.topics[count.index], "max_message_bytes") != -1 ? "${lookup(var.topics[count.index], "max_message_bytes")}" : 1000012
}
}
but when running terraform plan I get the following error:
attribute "partition_count" is required.
Note: I also tried partition_count = optional(number) when declaring the variable in variable.tf (to make that attribute optional), but I get the following error:
Keyword "optional" is not a valid type constructor
I thought it might be due to the Terraform version constraint I am currently using, which is ">= 0.12", but when I tried with ">= 0.15" I got the same 'Keyword "optional" is not a valid type constructor' error.
Is there any way I can fix this issue?
Try adding this in the module where the variable is declared:
terraform {
  experiments = [module_variable_optional_attrs]
}
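With that experiment enabled (it needs Terraform 0.14 or newer actually installed; changing the required_version constraint alone does not change the CLI you run), a rough sketch of the child module's variables.tf could look like this. An omitted partition_count then arrives as null, which the existing == null ? 6 : ... fallback in main.tf already handles:
terraform {
  # optional object attributes are experimental before Terraform 1.3
  experiments = [module_variable_optional_attrs]
}

variable "topics" {
  type = list(object({
    name                      = string
    is_public                 = bool
    is_cleanup_policy_compact = bool
    version                   = number
    max_message_bytes         = number
    partition_count           = optional(number) # may be omitted per topic
  }))
  description = "list of topics with their configuration"
  default     = null
}
On Terraform 1.3 and later the experiments block can be dropped and the default supplied directly with optional(number, 6).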
I'm writing a model in NuSMV, but there is a problem. When I try to simulate the model in interactive mode using ./NuSMV -int, it gets stuck on a state, saying there are no further reachable states. This should not happen, given the way I wrote the model and the transition from that state. For more information I include here the entire model, the execution trace, and the entire last state of the trace.
For the model, see the Pastebin link (Model). The interesting part of the model is MODULE PI and the transition from pc = 53 to pc = 54 in the TRANS section.
This is the trace of the simulation:
Trace Type: Simulation
-> State: 13.1 <-
ca.x1 = None
ca.x2 = None
ca.PK = None
ca.x3 = None
ca.len = 0
cb.x1 = None
cb.PK = None
cb.x2 = None
cb.len = 0
IniCommitAB = FALSE
IniRunningAB = FALSE
ResRunningAB = FALSE
ResCommitAB = FALSE
p_initial.PIni_process1.slef = None
p_initial.PIni_process1.party = None
p_initial.PIni_process1.nonce = None
p_initial.PIni_process1.runable = FALSE
p_initial.PIni_process1.g1 = None
p_initial.PIni_process1.pc = 1
p_initial.PIni_process2.slef = None
p_initial.PIni_process2.party = None
p_initial.PIni_process2.nonce = None
p_initial.PIni_process2.runable = FALSE
p_initial.PIni_process2.g1 = None
p_initial.PIni_process2.pc = 1
p_initial.PRes_process.slef = None
p_initial.PRes_process.nonce = None
p_initial.PRes_process.g2 = None
p_initial.PRes_process.g3 = None
p_initial.PRes_process.runable = FALSE
p_initial.PRes_process.pc = 1
p_initial.PI_process.kNa = FALSE
p_initial.PI_process.kNb = FALSE
p_initial.PI_process.k_Na_Nb__A = FALSE
p_initial.PI_process.k_Na_A__B = FALSE
p_initial.PI_process.k_Nb__B = FALSE
p_initial.PI_process.x1 = None
p_initial.PI_process.x2 = None
p_initial.PI_process.x3 = None
p_initial.PI_process.pc = 1
p_initial.PI_process.runable = FALSE
p_initial.pc = 1
cb_is_empty = TRUE
ca_is_empty = TRUE
p_initial.PI_process.x3_I = FALSE
p_initial.PI_process.check = TRUE
-> Input: 13.2 <-
_process_selector_ = p_initial
running = FALSE
p_initial.running = TRUE
p_initial.PI_process.running = FALSE
p_initial.PRes_process.running = FALSE
p_initial.PIni_process2.running = FALSE
p_initial.PIni_process1.running = FALSE
cb.running = FALSE
ca.running = FALSE
-> State: 13.2 <-
p_initial.pc = 2
-> Input: 13.3 <-
-> State: 13.3 <-
p_initial.PIni_process1.slef = A1
p_initial.PIni_process1.party = I
p_initial.PIni_process1.nonce = Na
p_initial.PIni_process1.runable = TRUE
p_initial.pc = 4
-> Input: 13.4 <-
-> State: 13.4 <-
p_initial.PRes_process.slef = B
p_initial.PRes_process.nonce = Nb
p_initial.PRes_process.runable = TRUE
p_initial.pc = 5
-> Input: 13.5 <-
-> State: 13.5 <-
p_initial.PI_process.runable = TRUE
p_initial.pc = 6
-> Input: 13.6 <-
_process_selector_ = p_initial.PIni_process1
p_initial.running = FALSE
p_initial.PIni_process1.running = TRUE
-> State: 13.6 <-
p_initial.PIni_process1.pc = 4
-> Input: 13.7 <-
-> State: 13.7 <-
p_initial.PIni_process1.pc = 5
-> Input: 13.8 <-
-> State: 13.8 <-
p_initial.PIni_process1.pc = 6
-> Input: 13.9 <-
-> State: 13.9 <-
ca.x1 = A1
ca.x2 = Na
ca.PK = A1
ca.x3 = I
ca.len = 1
p_initial.PIni_process1.pc = 7
ca_is_empty = FALSE
-> Input: 13.10 <-
_process_selector_ = p_initial.PI_process
p_initial.PI_process.running = TRUE
p_initial.PIni_process1.running = FALSE
-> State: 13.10 <-
p_initial.PI_process.pc = 52
-> Input: 13.11 <-
-> State: 13.11 <-
ca.x1 = None
ca.x2 = None
ca.PK = None
ca.x3 = None
ca.len = 0
p_initial.PI_process.x1 = Na
p_initial.PI_process.x2 = A1
p_initial.PI_process.x3 = I
p_initial.PI_process.pc = 53
ca_is_empty = TRUE
p_initial.PI_process.x3_I = TRUE
And this is the entire last state of the execution:
ca.x1 = None
ca.x2 = None
ca.PK = None
ca.x3 = None
ca.len = 0
cb.x1 = None
cb.PK = None
cb.x2 = None
cb.len = 0
IniCommitAB = FALSE
IniRunningAB = FALSE
ResRunningAB = FALSE
ResCommitAB = FALSE
p_initial.PIni_process1.slef = A1
p_initial.PIni_process1.party = I
p_initial.PIni_process1.nonce = Na
p_initial.PIni_process1.runable = TRUE
p_initial.PIni_process1.g1 = None
p_initial.PIni_process1.pc = 7
p_initial.PIni_process2.slef = None
p_initial.PIni_process2.party = None
p_initial.PIni_process2.nonce = None
p_initial.PIni_process2.runable = FALSE
p_initial.PIni_process2.g1 = None
p_initial.PIni_process2.pc = 1
p_initial.PRes_process.slef = B
p_initial.PRes_process.nonce = Nb
p_initial.PRes_process.g2 = None
p_initial.PRes_process.g3 = None
p_initial.PRes_process.runable = TRUE
p_initial.PRes_process.pc = 1
p_initial.PI_process.kNa = FALSE
p_initial.PI_process.kNb = FALSE
p_initial.PI_process.k_Na_Nb__A = FALSE
p_initial.PI_process.k_Na_A__B = FALSE
p_initial.PI_process.k_Nb__B = FALSE
p_initial.PI_process.x1 = Na
p_initial.PI_process.x2 = A1
p_initial.PI_process.x3 = I
p_initial.PI_process.pc = 53
p_initial.PI_process.runable = TRUE
p_initial.pc = 6
cb_is_empty = TRUE
ca_is_empty = TRUE
p_initial.PI_process.x3_I = TRUE
The problem is that NuSMV gets stuck on this state. I don't know why, since the condition to step into the next state is (x3 = I), which is true; in fact, p_initial.PI_process.x3_I = TRUE. Moreover, every simulation stops at the 11th state at the latest. These are the commands I run:
read_model -i <model>
flatten_hierarchy
encode_variables
build_model
pick_state -r
simulate -r -p -k 12
I know that it is not a simple problem, but I need help; I have been stuck on this for weeks.
I'm trying to spin up an Aurora Postgres cluster and I can't seem to make it available over the internet. I'm using Terraform to define the infrastructure.
I've created a security group to allow external access, and it is attached to the VPC subnets used by the cluster. Still, I can't access the endpoints from my local machine.
I can't figure out what I'm missing.
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = ">=3.11.0"
name = "vpc-auroradb-${var.environment}"
cidr = var.vpc_cidr_block
azs = var.availability_zones
private_subnets = var.vpc_private_subnets
public_subnets = var.vpc_public_subnets
database_subnets = var.vpc_database_subnets
enable_nat_gateway = true
enable_dns_hostnames = true
enable_dns_support = true
create_igw = true
create_database_internet_gateway_route = true
create_database_nat_gateway_route = true
create_database_subnet_group = true
create_database_subnet_route_table = true
}
module "aurora_cluster" {
source = "terraform-aws-modules/rds-aurora/aws"
version = ">=6.1.3"
name = "bambi-${var.environment}"
engine = "aurora-postgresql"
engine_version = "12.8"
instance_class = "db.t4g.large"
publicly_accessible = true
instances = {
1 = {
identifier = "bambi-1"
}
2 = {
identifier = "bambi-2"
}
}
autoscaling_enabled = true
autoscaling_min_capacity = 2
autoscaling_max_capacity = 3
vpc_id = module.vpc.vpc_id
db_subnet_group_name = module.vpc.database_subnet_group_name
create_db_subnet_group = false
create_security_group = false
iam_database_authentication_enabled = true
storage_encrypted = true
apply_immediately = true
monitoring_interval = 30
db_parameter_group_name = aws_db_parameter_group.parameter_group.id
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.parameter_group.id
vpc_security_group_ids = [aws_security_group.sg_public.id]
enabled_cloudwatch_logs_exports = ["postgresql"]
}
resource "aws_security_group" "sg_public" {
vpc_id = module.vpc.vpc_id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"] # Allowing traffic in from all sources
}
egress {
from_port = 0 # Allowing any incoming port
to_port = 0 # Allowing any outgoing port
protocol = "-1" # Allowing any outgoing protocol
cidr_blocks = ["0.0.0.0/0"] # Allowing traffic out to all IP addresses
}
}
From the documentation of the VPC module used, in order to have public access to the database, you need the following:
create_database_subnet_group = true
create_database_subnet_route_table = true
create_database_internet_gateway_route = true
enable_dns_hostnames = true
enable_dns_support = true
create_database_nat_gateway_route should not be true. If we take a look at the code for the module on GitHub:
resource "aws_route" "database_internet_gateway" {
count = var.create_vpc && var.create_igw && var.create_database_subnet_route_table && length(var.database_subnets) > 0 && var.create_database_internet_gateway_route && false == var.create_database_nat_gateway_route ? 1 : 0
route_table_id = aws_route_table.database[0].id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.this[0].id
timeouts {
create = "5m"
}
}
We can see that the count for the internet gateway route will be 0. This means the route that would allow public internet access is not created for the database subnets.
On the other hand, setting create_database_internet_gateway_route to true blocks access through the NAT gateway as well, since the route table won't have that route either:
resource "aws_route" "database_nat_gateway" {
count = var.create_vpc && var.create_database_subnet_route_table && length(var.database_subnets) > 0 && false == var.create_database_internet_gateway_route && var.create_database_nat_gateway_route && var.enable_nat_gateway ? var.single_nat_gateway ? 1 : length(var.database_subnets) : 0
route_table_id = element(aws_route_table.database.*.id, count.index)
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = element(aws_nat_gateway.this.*.id, count.index)
timeouts {
create = "5m"
}
}
Essentially you block all the traffic by setting both variables to true.
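For reference, a sketch of how the vpc module block from the question could look with only that flag changed; everything else stays as it was, and the internet gateway route for the database subnets then actually gets created:
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = ">=3.11.0"

  name = "vpc-auroradb-${var.environment}"
  cidr = var.vpc_cidr_block

  azs              = var.availability_zones
  private_subnets  = var.vpc_private_subnets
  public_subnets   = var.vpc_public_subnets
  database_subnets = var.vpc_database_subnets

  enable_nat_gateway   = true
  enable_dns_hostnames = true
  enable_dns_support   = true
  create_igw           = true

  # Route the database subnets to the internet gateway;
  # the NAT gateway route must stay disabled, or the IGW route is not created.
  create_database_internet_gateway_route = true
  create_database_nat_gateway_route      = false

  create_database_subnet_group       = true
  create_database_subnet_route_table = true
}
The cluster itself still needs publicly_accessible = true and the permissive security group shown in the question to be reachable from your machine.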
I'm trying to create a macro that builds a pivot table and I need help debugging. Here is the script:
Sheets.Add
newsheet = ActiveSheet.Name
ActiveWorkbook.PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
    dataname, Version:=xlPivotTableVersion10). _
    CreatePivotTable TableDestination:=newsheet & "!R3C1", TableName:="PivotTable2" _
    , DefaultVersion:=xlPivotTableVersion10
Sheets(newsheet).Select
Cells(3, 1).Select
With ActiveSheet.PivotTables("PivotTable2")
    .ColumnGrand = True
    .HasAutoFormat = True
    .DisplayErrorString = False
    .DisplayNullString = True
    .EnableDrilldown = True
    .ErrorString = ""
    .MergeLabels = False
    .NullString = ""
    .PageFieldOrder = 2
    .PageFieldWrapCount = 0
    .PreserveFormatting = True
    .RowGrand = True
    .SaveData = True
    .PrintTitles = False
    .RepeatItemsOnEachPrintedPage = True
    .TotalsAnnotation = False
    .CompactRowIndent = 1
    .InGridDropZones = True
    .DisplayFieldCaptions = True
    .DisplayMemberPropertyTooltips = False
    .DisplayContextTooltips = True
    .ShowDrillIndicators = True
    .PrintDrillIndicators = False
    .AllowMultipleFilters = True
    .SortUsingCustomLists = True
    .FieldListSortAscending = False
    .ShowValuesRow = True
    .CalculatedMembersInFilters = False
    .RowAxisLayout xlTabularRow
End With
With ActiveSheet.PivotTables("PivotTable2").PivotCache
    .RefreshOnFileOpen = False
    .MissingItemsLimit = xlMissingItemsDefault
End With
ActiveSheet.PivotTables("PivotTable2").RepeatAllLabels xlRepeatLabels
With ActiveSheet.PivotTables("PivotTable2").PivotFields("Status")
    .Orientation = xlRowField
    .Position = 1
End With
ActiveSheet.PivotTables("PivotTable2").AddDataField ActiveSheet.PivotTables( _
    "PivotTable2").PivotFields("Status"), "Count of Status", xlCount
End Sub
The error message I receive is run-time error 1004:
application-defined or object-defined error
The line that is the issue is below:
.DisplayMemberPropertyTooltips = False
How do I correct this? Thanks!
Good Morning!
Currently I have set up my FIWARE structure to save my historical records in MongoDB, using mLab as the hosting provider.
I attach the configuration file of my agent. The problem is that, because of the mandatory "/" character in the service path, I cannot access the generated historical data, since "/" is not an allowed character for collection names in MongoDB.
agent_1.conf
cygnus-ngsi.sources = http-source
cygnus-ngsi.sinks = mongo-sink
cygnus-ngsi.channels = mongo-channel
cygnus-ngsi.sources.http-source.channels = mongo-channel
cygnus-ngsi.sources.http-source.type = org.apache.flume.source.http.HTTPSource
cygnus-ngsi.sources.http-source.port = 5050
cygnus-ngsi.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.NGSIRestHandler
cygnus-ngsi.sources.http-source.handler.notification_target = /notify
cygnus-ngsi.sources.http-source.handler.default_service = default
cygnus-ngsi.sources.http-source.handler.default_service_path = /sevilla
cygnus-ngsi.sources.http-source.handler.events_ttl = 2
cygnus-ngsi.sources.http-source.interceptors = ts
cygnus-ngsi.sources.http-source.interceptors.ts.type = timestamp
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel
cygnus-ngsi.sinks.mongo-sink.enable_encoding = false
cygnus-ngsi.sinks.mongo-sink.enable_grouping = false
cygnus-ngsi.sinks.mongo-sink.enable_name_mappings = false
cygnus-ngsi.sinks.mongo-sink.enable_lowercase = false
cygnus-ngsi.sinks.mongo-sink.data_model = dm-by-service-path
cygnus-ngsi.sinks.mongo-sink.attr_persistence = row
cygnus-ngsi.sinks.mongo-sink.mongo_hosts = ds******.mlab.com:35866
cygnus-ngsi.sinks.mongo-sink.mongo_username = my_user
cygnus-ngsi.sinks.mongo-sink.mongo_password = ********
cygnus-ngsi.sinks.mongo-sink.db_prefix = sth_
cygnus-ngsi.sinks.mongo-sink.collection_prefix = sth_
cygnus-ngsi.sinks.mongo-sink.batch_size = 1
cygnus-ngsi.sinks.mongo-sink.batch_timeout = 30
cygnus-ngsi.sinks.mongo-sink.batch_ttl = 10
cygnus-ngsi.sinks.mongo-sink.data_expiration = 0
cygnus-ngsi.sinks.mongo-sink.collections_size = 0
cygnus-ngsi.sinks.mongo-sink.max_documents = 0
cygnus-ngsi.sinks.mongo-sink.ignore_white_spaces = true
cygnus-ngsi.channels.mongo-channel.type = com.telefonica.iot.cygnus.channels.CygnusMemoryChannel
cygnus-ngsi.channels.mongo-channel.capacity = 1000
cygnus-ngsi.channels.mongo-channel.transactionCapacity = 100
Is there any way for Cygnus to remove the "/" character from the service path?
Error: http://www.subirimagenes.com/imagedata.php?url=http://s2.subirimagenes.com/imagen/9827048captura-de-pantalla.png
SOLUTION: You just have to set enable_encoding to true in the agent configuration:
cygnus-ngsi.sinks.mongo-sink.enable_encoding = true
Thank you very much!