Error: an empty namespace may not be set during creation - kubernetes

I have a namespace already available in a k8s cluster; however, it is empty. I am trying to create a k8s ConfigMap, Deployment, Service, and Secrets in it using the Terraform Kubernetes provider.
However, it gives the error below for every object.
stdout | 11/23/2022, 5:31:06 PM | Checking state lock
stdout | 11/23/2022, 5:31:06 PM | $ terraform apply -input=false -no-color tfplan
stdout | 11/23/2022, 5:31:06 PM | [OUTPUT]
stdout | 11/23/2022, 5:31:07 PM | Acquiring state lock. This may take a few moments...
stdout | 11/23/2022, 5:31:09 PM | module.service.kubernetes_service.test_nginx: Creating...
stdout | 11/23/2022, 5:31:09 PM | module.configmap.kubernetes_config_map.nginx_translations_resourcebundle: Creating...
stdout | 11/23/2022, 5:31:09 PM | module.configmap.kubernetes_config_map.nginx_config: Creating...
stdout | 11/23/2022, 5:31:10 PM | module.deployment.kubernetes_deployment.test_nginx: Creating...
stderr | 11/23/2022, 5:31:10 PM |
stderr | 11/23/2022, 5:31:10 PM | Error: an empty namespace may not be set during creation
on configmap/configmap.tf line 7, in resource "kubernetes_config_map" "config":
7: resource "kubernetes_config_map" "nginx_ext_config" {
This is the .tf file for the ConfigMap:
locals {
  directory_external_config = "./configmap/config"
}

resource "kubernetes_config_map" "config" {
  metadata {
    name      = "test-nginx"
    namespace = var.TEST_NAMESPACE
  }

  data = {
    for f in fileset(local.directory_external_config, "*") :
    f => file(join("/", [local.directory_external_config, f]))
  }
}
Here, I am passing the namespace from a Terraform variable (namespace = var.TEST_NAMESPACE).
Please guide me on what I am missing.
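From the error message, the provider appears to be receiving an empty string for the namespace, so the most likely cause is that var.TEST_NAMESPACE resolves to "" (unset, defaulted to an empty string, or not passed into the module). As a minimal sketch, assuming the variable is declared inside the module that creates these resources, a validation block can catch this at plan time (everything beyond the variable name is illustrative):

variable "TEST_NAMESPACE" {
  type        = string
  description = "Existing namespace to deploy into"

  # Hypothetical guard: fail at plan time if the namespace is empty,
  # instead of letting the provider reject the create call.
  validation {
    condition     = length(trimspace(var.TEST_NAMESPACE)) > 0
    error_message = "TEST_NAMESPACE must be a non-empty namespace name."
  }
}

Also note that root-module variables are not inherited by child modules automatically; each module block (configmap, deployment, service, secrets) has to receive the value explicitly, e.g. TEST_NAMESPACE = var.TEST_NAMESPACE.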

Related

Exclude kubernetes namespaces from metric collection by datadog agent

After installing the Datadog chart (version 3.1.3) on EKS, I need to exclude some namespaces from metric collection. I'm using containerExclude to limit the namespace scope. The values I'm using are as follows:
datadog:
  site: datadoghq.eu
  clusterName: eks-test
  kubeStateMetricsEnabled: false
  kubeStateMetricsCore:
    enabled: false
  containerExclude: "kube_namespace:astronomer kube_namespace:astronomer-.* kube_namespace:kube-system kube_namespace:kube-public kube_namespace:kube-node-lease"
  prometheusScrape:
    enabled: true
    serviceEndpoints: true
    additionalConfigs:
      - autodiscovery:
          kubernetes_annotations:
            include:
              prometheus.io/scrape: "true"
            exclude:
              prometheus.io/scrape: "false"
clusterAgent:
  enabled: true
  metricsProvider:
    enabled: false
agents:
  enabled: true
Looking at the pod environment variables, I can see this is being passed to the agent correctly:
DD_CONTAINER_EXCLUDE: kube_namespace:astronomer kube_namespace:astronomer-.* kube_namespace:kube-system kube_namespace:kube-public kube_namespace:kube-node-lease
However, I still see scrape logs from those namespaces:
2022-09-26 13:48:38 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:f744b75c375b067b | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.79.111:9102/metrics
2022-09-26 13:48:38 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:c52e8d14335bb33d | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.22.119:9102/metrics
2022-09-26 13:48:39 UTC | CORE | ERROR | (pkg/collector/python/datadog_agent.go:123 in LogMessage) | openmetrics:ba505488f569ffa0 | (base.py:66) | There was an error scraping endpoint http://172.20.0.10:9153/metrics: HTTPConnectionPool(host='172.20.0.10', port=9153): Max retries exceeded with url: /metrics (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fd271df8190>, 'Connection to 172.20.0.10 timed out. (connect timeout=10.0)'))
2022-09-26 13:48:39 UTC | CORE | ERROR | (pkg/collector/worker/check_logger.go:69 in Error) | check:openmetrics | Error running check: [{"message": "There was an error scraping endpoint http://172.20.0.10:9153/metrics: HTTPConnectionPool(host='172.20.0.10', port=9153): Max retries exceeded with url: /metrics (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fd271df8190>, 'Connection to 172.20.0.10 timed out. (connect timeout=10.0)'))", "traceback": "Traceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 1116, in run\n self.check(instance)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/openmetrics/v2/base.py\", line 67, in check\n raise_from(type(e)(\"There was an error scraping endpoint {}: {}\".format(endpoint, e)), None)\n File \"<string>\", line 3, in raise_from\nrequests.exceptions.ConnectTimeout: There was an error scraping endpoint http://172.20.0.10:9153/metrics: HTTPConnectionPool(host='172.20.0.10', port=9153): Max retries exceeded with url: /metrics (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7fd271df8190>, 'Connection to 172.20.0.10 timed out. (connect timeout=10.0)'))\n"}]
2022-09-26 13:48:39 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:8e74b74102a3722 | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.129.200:9102/metrics
2022-09-26 13:48:39 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:e6012834c5d9bc2e | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.19.241:9127/metrics
2022-09-26 13:48:39 UTC | CORE | INFO | (pkg/collector/python/datadog_agent.go:127 in LogMessage) | openmetrics:ba98e825c73ee1b4 | (base.py:60) | Scraping OpenMetrics endpoint: http://172.20.64.136:9102/metrics
Those services belong to kube-system, which is excluded, but these metrics come from the Prometheus (OpenMetrics) checks. Do I have to apply a similar setting in prometheusScrape.additionalConfigs?

Get Specific Strings from File and Store in Variable

My sample log looks like:
2022-09-01 23:13:05Z | error | 2022-09-02 02:13:05 - [Task] Id:120 Name:OPT_VIM_1HEAD Exception with index:18 | 18.9251137 | Exception:
ERROR connection to partner '10.19.101.17:3300' broken
2022-09-01 23:13:25Z | error | 2022-09-02 02:13:25 - [Task] Id:121 Name:OPT_VIM_1ITEM
ERROR connection to partner '10.19.101.22:3300' broken
2022-09-01 23:13:25Z | error | 2022-09-02 02:13:25 - [Task] Id:121 Name:OPT_VIM_1ITEM RunId:7 Task execution failed with error: One or more errors occurred., detail:
ERROR connection to partner '10.19.101.22:3300' broken
I want to extract the job name, OPT_VIM_1HEAD or OPT_VIM_1ITEM (it's dynamic), and also the timestamp that follows the "error" field, 2022-09-02 02:13:25 or 2022-09-02 02:13:05, into separate variables.
I have written the following script:
$dir = 'C:\ProgramData\AecorsoftDataIntegrator\logs\'
$StartTime = get-date
$fileList = (Get-ChildItem -Path $dir -Filter '2022-09-02.log' | Sort-Object LastWriteTime -Descending | Select-Object -First 1).fullname
$message = Get-Content $fileList | Where-Object {$_ -like '*error*'}
$message
$details = Select-String -LiteralPath $fileList -Pattern 'error' -Context 0,14 | Select-Object -First 1 | Select-Object Path, FileName, Pattern, Linenumber
$details[0]
But I am not able to retrieve the tokens mentioned above.
Use regex processing via the -match operator to extract the tokens of interest from each line:
# Sample lines from the log file.
$logLines = @'
2022-09-01 23:13:05Z | error | 2022-09-02 02:13:05 - [Task] Id:120 Name:OPT_VIM_1HEAD Exception with index:18 | 18.9251137 | Exception:
ERROR connection to partner '10.19.101.17:3300' broken
2022-09-01 23:13:25Z | error | 2022-09-02 02:13:25 - [Task] Id:121 Name:OPT_VIM_1ITEM
ERROR connection to partner '10.19.101.22:3300' broken
2022-09-01 23:13:25Z | error | 2022-09-02 02:13:25 - [Task] Id:121 Name:OPT_VIM_1ITEM RunId:7 Task execution failed with error: One or more errors occurred., detail:
ERROR connection to partner '10.19.101.22:3300' broken
'@ -split '\r?\n'

# Process each line...
$logLines | ForEach-Object {
    # ... by matching it ($_) against a regex with capture groups - (...) -
    # using the -match operator.
    if ($_ -match '\| (\d{4}-.+?) - \[.+? Name:(\w+)') {
        # The line matched.
        # Capture groups 1 and 2 in the automatic $Matches variable contain
        # the tokens of interest; assign them to variables.
        $timestamp = $Matches[1]
        $jobName   = $Matches[2]
        # Sample output, as an object.
        [PSCustomObject] @{
            JobName   = $jobName
            Timestamp = $timestamp
        }
    }
}
Output:
JobName       Timestamp
-------       ---------
OPT_VIM_1HEAD 2022-09-02 02:13:05
OPT_VIM_1ITEM 2022-09-02 02:13:25
OPT_VIM_1ITEM 2022-09-02 02:13:25
For an explanation of the regex and the ability to experiment with it, see this regex101.com page.
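To apply the same approach to the actual log file from your script, something along these lines should work (a minimal sketch that reuses the $fileList variable from your script and the regex above):

# Read the real log file and emit one object per matching line.
# ($fileList comes from the question's script; the regex is the one above.)
$results = Get-Content $fileList | ForEach-Object {
    if ($_ -match '\| (\d{4}-.+?) - \[.+? Name:(\w+)') {
        [PSCustomObject] @{
            JobName   = $Matches[2]
            Timestamp = $Matches[1]
        }
    }
}

# Individual tokens are then available as, e.g.,
# $results[0].JobName and $results[0].Timestamp
$results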

Yocto glibc_2.30.bb fatal error: asm/unistd.h: No such file or directory

I am trying to build Yocto Zeus in Podman and getting the error below. I noticed that ./recipe-sysroot/usr/include/ only has the 32-bit version of the sigcontext.h header, whereas the unistd.h file was copied under the asm-generic directory:
./recipe-sysroot/usr/include/asm/sigcontext-32.h
./recipe-sysroot/usr/include/asm-generic/unistd.h
| ../sysdeps/unix/sysv/linux/sys/syscall.h:24:10: fatal error: asm/unistd.h: No such file or directory
| 24 | #include <asm/unistd.h>
| | ^~~~~~~~~~~~~~
| compilation terminated.
| Traceback (most recent call last):
| File "../scripts/gen-as-const.py", line 120, in <module>
| main()
| File "../scripts/gen-as-const.py", line 116, in main
| consts = glibcextract.compute_c_consts(sym_data, args.cc)
| File "/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/git/scripts/glibcextract.py", line 62, in compute_c_consts
| subprocess.check_call(cmd, shell=True)
| File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
| raise CalledProcessError(retcode, cmd)
| subprocess.CalledProcessError: Command 'arm-poky-linux-gnueabi-gcc -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a7 --sysroot=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot -std=gnu11 -fgnu89-inline -O2 -pipe -g -feliminate-unused-debug-types -fmacro-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0=/usr/src/debug/glibc/2.30-r0 -fdebug-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0=/usr/src/debug/glibc/2.30-r0 -fdebug-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot= -fdebug-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot-native= -Wall -Wwrite-strings -Wundef -Werror -fmerge-all-constants -frounding-math -fno-stack-protector -Wstrict-prototypes -Wold-style-definition -fmath-errno -ftls-model=initial-exec -I../include -I/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/csu -I/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi -I../sysdeps/unix/sysv/linux/arm -I../sysdeps/arm/nptl -I../sysdeps/unix/sysv/linux/include -I../sysdeps/unix/sysv/linux -I../sysdeps/nptl -I../sysdeps/pthread -I../sysdeps/gnu -I../sysdeps/unix/inet -I../sysdeps/unix/sysv -I../sysdeps/unix/arm -I../sysdeps/unix -I../sysdeps/posix -I../sysdeps/arm/armv7/multiarch -I../sysdeps/arm/armv7 -I../sysdeps/arm/armv6t2 -I../sysdeps/arm/armv6 -I../sysdeps/arm/include -I../sysdeps/arm -I../sysdeps/wordsize-32 -I../sysdeps/ieee754/flt-32 -I../sysdeps/ieee754/dbl-64 -I../sysdeps/ieee754 -I../sysdeps/generic -I.. -I../libio -I. -nostdinc -isystem /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi/../../lib/arm-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/9.2.0/include -isystem /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi/../../lib/arm-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/9.2.0/include-fixed -isystem /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot/usr/include -D_LIBC_REENTRANT -include /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/libc-modules.h -DMODULE_NAME=libc -include ../include/libc-symbols.h -DTOP_NAMESPACE=glibc -DGEN_AS_CONST_HEADERS -MD -MP -MF /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h.dT -MT '/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h.d /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h' -S -o /tmp/tmp2wx6srl6/test.s -x c - < /tmp/tmp2wx6srl6/test.c' returned non-zero exit status 1
| make[2]: *** [../Makerules:271: /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h] Error 1
| make[2]: *** Waiting for unfinished jobs....
| In file included from ../signal/signal.h:291,
| from ../include/signal.h:2,
| from ../misc/sys/param.h:28,
| from ../include/sys/param.h:1,
| from ../sysdeps/generic/hp-timing-common.h:39,
| from ../sysdeps/generic/hp-timing.h:25,
| from ../nptl/descr.h:27,
| from ../sysdeps/arm/nptl/tls.h:42,
| from ../sysdeps/unix/sysv/linux/arm/tls.h:23,
| from ../include/link.h:51,
| from ../include/dlfcn.h:4,
| from ../sysdeps/generic/ldsodefs.h:32,
| from ../sysdeps/arm/ldsodefs.h:38,
| from ../sysdeps/gnu/ldsodefs.h:46,
| from ../sysdeps/unix/sysv/linux/ldsodefs.h:25,
| from ../sysdeps/unix/sysv/linux/arm/ldsodefs.h:22,
| from <stdin>:2:
| ../sysdeps/unix/sysv/linux/bits/sigcontext.h:30:11: fatal error: asm/sigcontext.h: No such file or directory
| 30 | # include <asm/sigcontext.h>
| | ^~~~~~~~~~~~~~~~~~
| compilation terminated.
|
ERROR: Task (/home/dev/inode_zeus/sources/poky/meta/recipes-core/glibc/glibc_2.30.bb:do_compile) failed with exit code '1'
DEBUG: Teardown for bitbake-worker
NOTE: Tasks Summary: Attempted 437 tasks of which 430 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/home/dev/inode_zeus/sources/poky/meta/recipes-core/glibc/glibc_2.30.bb:do_compile
Please note that I am able to build the Jethro version using the Podman container, which runs Ubuntu 16.04, but the Zeus build is failing.
Can someone tell me why these errors are seen?
I was able to resolve the issue by mapping the Yocto build directory to a host directory. The Yocto build then worked like a charm!
podman --storage-opt overlay.mount_program=/usr/bin/fuse-overlayfs \
  --storage-opt overlay.mountopt=nodev,metacopy=on,noxattrs=1 \
  run -it \
  -v $PWD/my_yocto/build_output:/home/oibdev/yocto/build \
  4cbcb3842ed5

Bad or inaccessible location specified in external data source

I'm trying to save a file from Azure File Storage into an Azure SQL Database table's varbinary(max) column (storing the whole content, as advised in this answer). I've tried a few times to adjust my SQL query, but without success. Here's the code, which results in the error 'Bad or inaccessible location specified in external data source "my_Azure_Files".' when it invokes OPENROWSET:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'mypassword123'
GO

CREATE DATABASE SCOPED CREDENTIAL [https://mystorageaccount.file.core.windows.net/]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sas_token_generated_on_azure_portal';

CREATE EXTERNAL DATA SOURCE my_Azure_Files
WITH (
    LOCATION = 'https://mystorageaccount.file.core.windows.net/test',
    CREDENTIAL = [https://mystorageaccount.file.core.windows.net/],
    TYPE = BLOB_STORAGE
);

Insert into dbo.myTable(targetColumn)
Select BulkColumn FROM OPENROWSET(
    BULK 'test.csv',
    DATA_SOURCE = 'my_Azure_Files',
    SINGLE_BLOB) AS testFile;

CLOSE MASTER KEY;
GO
I'm able to download the test.csv file in a web browser using the same SAS token and URL path. I'm also able to verify that the credential and the external data source were successfully created in the database:
External data source:
+----------------+----------------+-----------------------------------------------------+--------------+------------------+----------------------------+---------------+---------------+----------------+--------------------+----------+
| data_source_id | name           | location                                            | type_desc    | type             | resource_manager_location  | credential_id | database_name | shard_map_name | connection_options | pushdown |
+----------------+----------------+-----------------------------------------------------+--------------+------------------+----------------------------+---------------+---------------+----------------+--------------------+----------+
| 65540          | my_Azure_Files | https://mystorageaccount.file.core.windows.net/test | BLOB_STORAGE | 05/01/1900 00:00 | NULL                       | 65539         | NULL          | NULL           | NULL               | ON       |
+----------------+----------------+-----------------------------------------------------+--------------+------------------+----------------------------+---------------+---------------+----------------+--------------------+----------+
Database scoped credential:
+--------------------------------------------------+--------------+---------------+-------------------------+------------------+------------------+-------------+-----------+
| name                                             | principal_id | credential_id | credential_identity     | create_date      | modify_date      | target_type | target_id |
+--------------------------------------------------+--------------+---------------+-------------------------+------------------+------------------+-------------+-----------+
| https://mystorageaccount.file.core.windows.net/  | 1            | 65539         | SHARED ACCESS SIGNATURE | 15/07/2020 13:14 | 15/07/2020 13:14 | NULL        | NULL      |
+--------------------------------------------------+--------------+---------------+-------------------------+------------------+------------------+-------------+-----------+
When creating the SAS on the Azure portal, I checked all allowed resource types and all allowed permissions except 'Delete'. I also removed the leading '?' from the SAS before using it in the SECRET field.
I've tried variations of TYPE = BLOB_STORAGE and TYPE = HADOOP, as well as the SINGLE_BLOB, SINGLE_CLOB and SINGLE_NCLOB parameters.
Please help me solve my problem.
By following the steps below, I was able to successfully insert into the target table.
While generating the SAS, select the Allowed Resource Types 'Container' and 'Object'.
Copy the SAS token and create a master key with the command below:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password#123'
Use the SAS token without the leading '?' and create the scoped credential:
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2019-10-10XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
Create External Data Source referencing your blob path:
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.file.core.windows.net',
    CREDENTIAL = MyAzureBlobStorageCredential
);
Run the insert using OPENROWSET:
Insert into dbo.test(name1)
Select BulkColumn FROM OPENROWSET(
    BULK 'test/test.csv',
    DATA_SOURCE = 'MyAzureBlobStorage',
    SINGLE_BLOB) AS testFile;
You can also use BULK INSERT:
BULK INSERT dbo.test
FROM 'test/test.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
FORMAT = 'CSV');
This assumes that table dbo.test is already created.
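For completeness, a minimal sketch of what that target table could look like for this example (the column name is taken from the INSERT above; the varbinary(max) type is an assumption that matches storing the whole file as a single blob):

-- Hypothetical definition of the target table used above;
-- varbinary(max) can hold the entire file returned by SINGLE_BLOB.
CREATE TABLE dbo.test (
    name1 varbinary(max)
);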

How to split the denominator value from one column and store it in another column using perl?

My example code output:
+------------+-------------+-----------+------------+-----------+
| time       | name        | status    | s_used     | s_max     |
+------------+-------------+-----------+------------+-----------+
| 1482222363 | asf         | Closed    | 0/16       | 0         |
| 1482222363 | as0         | Available | 4/16       | 4         |
+------------+-------------+-----------+------------+-----------+
I have attached part of my output, which is generated using a Perl CGI script and a MySQL database.
My question is how to take the denominator value from the s_used column and store only that denominator in the s_max column using Perl.
I have attached the part of the code that I tried:
if ($i == 4) {
    if (/s_used/) {
        print;
    }
    else {
        chomp();
        my ($num, $s_max) = split /\//, $table_data{2}{'ENTRY'};
        print $s_max;
    }
}
Code Explanation:
$i == 4 is the column where I should store the value.
I get the time column from the SQL database ($time), name from $table_data{0}{'ENTRY'}, status from $table_data{1}{'ENTRY'}, and s_used from $table_data{2}{'ENTRY'}.
Expected output:
+------------+-------------+-----------+------------+-----------+
| time       | name        | status    | s_used     | s_max     |
+------------+-------------+-----------+------------+-----------+
| 1482222363 | asf         | Closed    | 0/16       | 16        |
| 1482222363 | as0         | Available | 4/16       | 16        |
+------------+-------------+-----------+------------+-----------+
Your line my($num,$s_max)=split /\//,$table_data{2}{'ENTRY'}; looks right.
Somehow the value of $s_max is incorrect at the time it is written to the DB. Since you did not post the portion of code that writes $s_max back to the DB, check what value $s_max holds (e.g. by printing it) right before it is written back. From there, trace back why an incorrect value is being assigned to $s_max, and the problem should be solved.
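As a quick check, here is a minimal sketch of the split in isolation, with debugging prints showing the value $s_max should hold right before it is written back (the sample value '4/16' is taken from the table above):

#!/usr/bin/perl
use strict;
use warnings;

# Simulate the value held in $table_data{2}{'ENTRY'} for the "as0" row.
my $s_used_entry = '4/16';

# Same split as in the question: numerator into $num, denominator into $s_max.
my ($num, $s_max) = split /\//, $s_used_entry;

print "s_used numerator:  $num\n";    # prints: s_used numerator:  4
print "s_max denominator: $s_max\n";  # prints: s_max denominator: 16

# Print $s_max like this immediately before the DB write in your CGI script
# to confirm that 16 (not 0 or 4) is what actually gets stored.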