I have the Nginx upload module working normally with Python (Tornado), and I save the paths of the uploaded files in the database.
However, I wonder why the upload module splits my uploads across 10 different folders, /var/www/.../uploads/0,1,2,...,9. The comment below says the files are hashed; what exactly does the module do here, and why?
# Store files to this directory
# The directory is hashed, subdirectories 0 1 2 3 4 5 6 7 8 9 should exist
upload_store /var/www/...uploads 1;
# filesystem location where we store uploads
#
# The second argument is the level of "hashing" that nginx will perform
# on the filenames before storing them to the filesystem. I can't find
# any documentation online, so as an example, say we were using this
# configuration:
#
# upload_store /tmp/uploads 2 1;
#
# A file named '43829042' would be written to this path:
#
# /tmp/uploads/42/0/43829042
#
# I hope that's clear enough. The argument is required and must be
# greater than 0. You can see the implementation here:
#
# http://lxr.evanmiller.org/http/source/core/ngx_file.c#L118
Source: http://bclennox.com/extremely-large-file-uploads-with-nginx-passenger-rails-and-jquery
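The hashing is simply a way of spreading uploads across subdirectories so that no single folder accumulates an enormous number of files. To make the path logic concrete, here is a small Python sketch (an illustration only, not the module's actual C implementation) of the scheme the quoted comment describes: each hashing level takes characters from the end of the generated file name, so a single level of 1 uses only the last digit, which is why the subdirectories 0 through 9 must exist.
# Illustration of the hashed-path logic described above (not the module's real code):
# each "level" takes characters from the end of the generated file name,
# right to left, to pick a subdirectory.
import os

def hashed_path(store_root, filename, levels):
    """Build the storage path the way the quoted comment describes it."""
    path = store_root
    pos = len(filename)
    for width in levels:                      # e.g. levels = [2, 1]
        path = os.path.join(path, filename[pos - width:pos])
        pos -= width
    return os.path.join(path, filename)

# upload_store /tmp/uploads 2 1;  ->  /tmp/uploads/42/0/43829042
print(hashed_path("/tmp/uploads", "43829042", [2, 1]))

# upload_store ... 1;  ->  only the last digit is used, hence the
# pre-created subdirectories 0..9
print(hashed_path("/var/www/uploads", "43829042", [1]))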
I have been trying to get this done for two days now and have had various errors to deal with. I am using an M1 (Apple silicon) Mac Pro with Xcode 13.4, and it has been difficult to get SonarQube running. I finally found an M1-specific Docker image and have been able to get SonarQube running locally.
My current challenge is getting the test results sent to the SonarQube project.
I have tried several methods, including:
xcrun xccov view YourPathToThisFile/*.xccovreport --json
This command is not working for me, and in any case it produces JSON whereas I want an XML format.
Is there a better way to get the code coverage report into SonarQube? I have SonarQube running, but the test results and coverage are not showing. The SonarQube page currently says "The main branch has no lines of code."
NB: I am running SonarQube with Docker.
Below is my SonarQube properties file.
#
# Swift SonarQube Plugin - Enables analysis of Swift and Objective-C projects into SonarQube.
# Copyright © 2015 Backelite (${email})
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Sonar Server details
sonar.host.url=http://localhost:9000/
sonar.login=782a04fee8bfc7ae181f04bbd13734eb89e5580c
# sonar.password=admin
# Project Details
sonar.projectKey=tinggios
sonar.projectName=TinggIOSApp
sonar.projectDescription=This is TinggiOS
# Comment if you have a project with mixed ObjC / Swift
sonar.language=swift
sonar.projectKey=tinggios
sonar.qualitygate.wait=true
# Path to source directories
# sonar.sources=SonarDemo,SonarDemoTests,SonarDemoUITests
sonar.sources=.
# Exclude directories
sonar.test.inclusions=**/*Test*/**
sonar.test.inclusions=*.swift
sonar.exclusions=**/*.xml,Pods/**/*,Reports/**/*
# sonar.inclusions=*.swift
# Path to test directories (comment if no test)
sonar.tests=TinggIOS/Core/Tests/CoreTests,TinggIOS/Home/Tests/HomeTests,TinggIOS/OnboardingUITest
# Destination Simulator to run surefire
# As string expected in destination argument of xcodebuild command
# Example = sonar.swift.simulator=platform=iOS Simulator,name=iPhone 6,OS=9.2
# sonar.swift.simulator=platform=iOS Simulator,name=iPhone 7,OS=12.0
sonar.swift.simulator=platform=iOS Simulator,name=iPhone 11,OS=15
# Xcode project configuration (.xcodeproj)
# Specify either xcodeproj or xcodeproj + xcworkspace,
# and use the latter to specify which project(s) to include in the analysis (comma separated list)
sonar.swift.project=TinggIOS/TinggIOS.xcodeproj
sonar.swift.workspace=TinggIOS/TinggIOS.xcworkspace
sonar.language=swift
sonar.c.file.suffixes=-
sonar.cpp.file.suffixes=-
sonar.objc.file.suffixes=-
# Specify your appname.
# This will be something like "myApp"
# Use when basename is different from targeted scheme.
# Or when slather fails with 'No product binary found'
sonar.swift.appName=TinggIOS
# Scheme to build your application
sonar.swift.appScheme=TinggIOS
# Configuration to use for your scheme. If you do not specify it, the default will be Debug
sonar.swift.appConfiguration=Debug
##########################
# Optional configuration #
##########################
# Encoding of the source code
sonar.sourceEncoding=UTF-8
# SCM
# sonar.scm.enabled=true
# sonar.scm.url=scm:git:http://xxx
# JUnit report generated by run-sonar.sh is stored in sonar-reports/TEST-report.xml
# Change it only if you generate the file on your own
# The XML files have to be prefixed by TEST- otherwise they are not processed
sonar.junit.reportsPath=sonar-reports/TEST-report.xml
# Cobertura report generated by run-sonar.sh is stored in sonar-reports/coverage-swift.xml
# Change it only if you generate the file on your own
sonar.swift.coverage.reportPattern=sonar-reports/coverage-swift*.xml
#sonar.coverageReportPaths=sonarqube-generic-coverage.xml
#sonar.swift.coverage.reportPattern=sonar-reports/cobertura.xml
# OCLint report generated by run-sonar.sh is stored in sonar-reports/oclint.xml
# Change it only if you generate the file on your own
sonar.swift.swiftlint.report=sonar-reports/*swiftlint.txt
# Change it only if you generate the file on your own
sonar.swift.tailor.report=sonar-reports/*tailor.txt
# Paths to exclude from coverage report (surefire, 3rd party libraries etc.)
# sonar.swift.excludedPathsFromCoverage=pattern1,pattern2
# sonar.swift.excludedPathsFromCoverage=.*Tests.*,
##########################
# Tailor configuration #
##########################
# Tailor configuration
# -l,--max-line-length=<0-999> maximum Line length (in characters)
# --list-files display Swift source files to be analyzed
# --max-class-length=<0-999> maximum Class length (in lines)
# --max-closure-length=<0-999> maximum Closure length (in lines)
# --max-file-length=<0-999> maximum File length (in lines)
# --max-function-length=<0-999> maximum Function length (in lines)
# --max-name-length=<0-999> maximum Identifier name length (in characters)
# --max-severity=<error|warning (default)> maximum severity
# --max-struct-length=<0-999> maximum Struct length (in lines)
# --min-name-length=<1-999> minimum Identifier name length (in characters)
sonar.swift.tailor.config=--no-color --max-line-length=100 --max-file-length=500 --max-name-length=40 --max-name-length=40 --min-name-length=4
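For the coverage side, one approach that may help (a sketch under assumptions, not something taken from the post above) is to convert the coverage data in the .xcresult bundle into SonarQube's generic coverage format and point sonar.coverageReportPaths at the result, in the spirit of SonarSource's xccov-to-sonarqube-generic.sh. The .xcresult path, the .swift filter and the exact xccov flags are assumptions and may need adjusting for your Xcode version.
#!/usr/bin/env python3
# Rough sketch: convert Xcode coverage (from an .xcresult bundle) into
# SonarQube's generic coverage XML. Paths and xccov flags are assumptions.
import re
import subprocess
import sys

XCRESULT = sys.argv[1] if len(sys.argv) > 1 else "build/TinggIOS.xcresult"  # assumed path
LINE_RE = re.compile(r"^\s*(\d+):\s*(\*|\d+)")

def xccov(*args):
    # 'xcrun xccov view --archive ...' flags can vary between Xcode versions
    return subprocess.run(["xcrun", "xccov", "view", "--archive", *args, XCRESULT],
                          capture_output=True, text=True, check=True).stdout

print('<coverage version="1">')
for path in xccov("--file-list").splitlines():
    path = path.strip()
    if not path.endswith(".swift"):
        continue
    print(f'  <file path="{path}">')
    for line in xccov("--file", path).splitlines():
        m = LINE_RE.match(line)
        if not m or m.group(2) == "*":        # '*' means the line is not executable
            continue
        covered = "true" if int(m.group(2)) > 0 else "false"
        print(f'    <lineToCover lineNumber="{m.group(1)}" covered="{covered}"/>')
    print('  </file>')
print('</coverage>')
Redirecting the output to sonarqube-generic-coverage.xml would let you enable the commented-out sonar.coverageReportPaths line above; the .xcresult bundle itself comes from running xcodebuild test with -enableCodeCoverage YES and -resultBundlePath.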
We are currently using CloudFormation to create a Glue job (via CodeBuild and CodePipeline). The one thing we are stuck on is how to automate getting the code into the Glue job.
The relevant piece of our CloudFormation template looks like this:
MyJob:
  Type: AWS::Glue::Job
  Properties:
    Command:
      Name: glueetl
      ScriptLocation: "s3://aws-glue-scripts//your-script-file.py"
    DefaultArguments:
      "--job-bookmark-option": "job-bookmark-enable"
    ExecutionProperty:
      MaxConcurrentRuns: 2
    MaxRetries: 0
    Name: cf-job1
    Role: !Ref MyJobRole
The problem is the "ScriptLocation". It looks like it is required to be an S3 location. How can we automate uploading it? The code is in a .py file in our Git repository, and I assume it is uploaded to the artifact repository as part of the CodeBuild process, but how do we access it?
Would like to hear how others are doing this. Thanks!
EDIT: I was able to find a similar Stack Overflow post: AWS Glue automatic job creation, but the answers really don't give a solution or address the question posed.
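One pattern that may help (a sketch under assumptions, not a definitive answer): have the build step upload the .py file from the repository checkout to S3 and pass the resulting URI into the template as a parameter, so ScriptLocation no longer has to be hard-coded. The bucket, key and local path below are hypothetical.
# Sketch: upload the Glue script from the checked-out repo to S3 during the
# build, then pass the S3 URI to CloudFormation as a parameter override.
# Bucket, key and file names are hypothetical - adjust to your pipeline.
import boto3

BUCKET = "my-glue-scripts-bucket"          # hypothetical bucket
KEY = "jobs/your-script-file.py"           # hypothetical key
LOCAL = "glue/your-script-file.py"         # path inside the repo checkout

s3 = boto3.client("s3")
s3.upload_file(LOCAL, BUCKET, KEY)

script_location = f"s3://{BUCKET}/{KEY}"
print(script_location)  # feed this into a template parameter, e.g.
                        # ScriptLocation: !Ref GlueScriptLocation
The same thing can be done with a plain aws s3 cp in the CodeBuild buildspec; the essential point is that the script lands in S3 before the stack is created or updated, and the template only ever receives an S3 URI.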
I've written a tool to handle the upload of stack dependencies, including CloudFormation nested templates and non-inline Lambda functions.
AWS Glue is not currently handled, since I haven't tried it in any project yet, but it should be easy to extend the tool to support Glue.
The dependencies are defined in a separate config file, and a piece of code within the tool is responsible for handling that config. Here are the sample configs:
Nested CloudFormation templates:
# DEPENDS=( <ParameterName>=<NestedTemplate> )
#
# Required: Yes if there are nested templates, otherwise No
# Default: None
# Syntax:
# <ParameterName>: The name of the template parameter that is referenced by
#                  the nested template property `TemplateURL`.
# <NestedTemplate>: A local path, or an S3 URL starting with `s3://` or
#                   `https://`, pointing to the nested template.
#                   Local nested templates are uploaded to the S3 bucket
#                   automatically during deployment.
# Description:
# Double quote the pairs which contain whitespaces or special characters.
# Use `#` to comment out.
# ---
# Example:
# DEPENDS=(
# NestedTemplateFooURL=/path/to/nested/foo/stack.json
# NestedTemplateBarURL=/path/to/nested/bar/stack.json
# )
Lambda functions:
# LAMBDA=( <S3BucketParameterName>:<S3KeyParameterName>=<LambdaFunction> )
#
# Required: Yes if there are non-inline Lambda Functions, otherwise No
# Default: None
# Syntax:
# <S3BucketParameterName>: The name of the template parameter that is
#                          referenced by the Lambda property `Code.S3Bucket`.
# <S3KeyParameterName>: The name of the template parameter that is
#                       referenced by the Lambda property `Code.S3Key`.
# <LambdaFunction>: A local path, or an S3 URL starting with `s3://`, pointing
#                   to the Lambda Function.
#                   Local Lambda Functions are zipped and uploaded to the S3
#                   bucket automatically during deployment.
# Description:
# Double quote the pairs which contain whitespaces or special characters.
# Use `#` to comment out.
# ---
# Example:
# LAMBDA=(
# S3BucketForLambdaFoo:S3KeyForLambdaFoo=/path/to/LambdaFoo.py
# S3BucketForLambdaBar:S3KeyForLambdaBar=s3://mybucket/LambdaBar.py
# )
The tool is written in bash and comes in two parts:
xsh: It works as a bash library framework.
xsh-lib/aws: It's a library for xsh.
The code you may need to expand is located in xsh-lib/aws/functions/cfn/deploy.sh.
The example deploy command looks like:
$ xsh aws/cfn/deploy -C /path/to/your/template-and-config-dir -t stack.json -c sample.conf
I'm considering abstracting the dependencies, such as CloudFormation templates, Lambda functions and Glue, into a single interface for both the configs and the handlers.
This will make it easier to add new dependency handlers to the deployer.
My repository is set up similar to the following:
repo_base
- artwork
- app
- designsystem
- api
Since each of the other folders in the repo (e.g. app, api, designsystem) depends on artwork, I have symlinks in place when running locally. This works fine, as the path for images in the designsystem subdirectory is something like ../../artwork. When you check out the repository, the entire tree is checked out, so the symlinks point to the correct directory.
However, when I deploy with capistrano, I use :repo_tree to only deploy a portion of the overall monorepo. For example, the deploy.rb script for the designsystem folder looks like:
# config valid for current version and patch releases of Capistrano
lock "~> 3.11.0"
set :application, "designsystem"
set :repo_url, "git@gitlab.com:myuser/mymonorepo"
set :deploy_to, "/var/www/someplace.net/designsystem.someplace.net"
set :deploy_via, "remote_cache_with_project_root"
set :repo_tree, 'designsystem'
set :log_level, :error
before 'deploy:set_current_revision', 'deploy:buildMonolith'
The problem, of course, is that this only ends up deploying the designsystem subdirectory. Thus, the symlinks aren't valid, and are actually skipped in the building (buildMonolith step).
I'm wondering how I might go about having Capistrano check out another subdirectory, artwork, and place it somewhere in the deployed source tree.
I was able to solve this by adding a Capistrano task file called assets.rb:
require 'pathname'
##
# Import assets from a top level monorepo directory into the current working
# directory.
#
# When you use :repo_tree to deploy a specific directory of a monorepo, but your
# asset repository is in a different directory, you need to check out this
# top-level directory and add it to the deployment path. For example, if your
# monorepo directory structure looks something like:
#
# - /app
#   - src/
#     - assets -> symlink to ../../assets
# - /assets
# - /api
#
# And you want to deploy /app, the symlink to the upper directory won't exist if
# capistrano is configured to use :repo_tree "app". In order to overcome this,
# this task checks out a specified piece of the larger monorepo (in this case,
# the assets directory), and places it in the deployment directory at a
# specified location.
#
# Configuration:
# In your deploy/<stage>.rb file, you will need to specify two variables:
# - :asset_path - The location within the deployment directory where the
# assets should be placed. Relative to the deployment working
# directory.
# - :asset_source - The location of the top-level asset folder in the
# monorepo. Relative to the top level of the monorepo (i.e.
# the directory that would be used as a deployment if
# :repo_tree was not specified).
#
# In the above example, you would specify:
#
# set :asset_path, "src/assets"
# set :asset_source, "assets"
#
namespace :deploy do
  desc "Import assets from a top-level monorepo directory"
  task :import_assets do
    on roles(:all) do |host|
      within repo_path do
        final_asset_location = "#{release_path}/#{fetch(:asset_path)}"
        asset_stat_result = capture "stat", "-t", "#{final_asset_location}"
        asset_stat_result = asset_stat_result.split(" ")
        if asset_stat_result[0] == "#{final_asset_location}"
          info "Removing existing asset directory #{final_asset_location}..."
          execute "rm", "-rf", "#{final_asset_location}"
        end
        source_dir = Pathname.new(final_asset_location).parent.to_s
        info "Importing assets to #{source_dir}/#{fetch(:asset_source)}"
        execute "GIT_WORK_TREE=#{source_dir}", :git, "checkout", "#{fetch(:branch)}", "--", "#{fetch(:asset_source)}"
        info "Moving asset directory #{source_dir}/#{fetch(:asset_source)} to #{final_asset_location}..."
        execute :mv, "#{source_dir}/#{fetch(:asset_source)}", "#{final_asset_location}"
      end
    end
  end
end
It would be nice if I could somehow link into the git scm plugin, rather than calling git from the command line directly.
I use Graphite tagged metrics with Grafana and Whisper, but http://graphite/tags/delSeries removes something, just not the .wsp files.
Untagged metrics create .wsp files in the Whisper data folder with human-readable names, but tagged metrics create only hash-named folders and .wsp files in the _tagged directory.
Like so:
/whisper
    /data
        /Players
            registrations.wsp
            today_registrations.wsp
        /Gaming
            playing_count.wsp
    /_tagged
        /f58
            /010
                f58010d4cef67599a31f4daaab4a53c4d7fd85a9faea546282d2058c40c7e7b9.wsp
        /f56
            /031
                f56031052aec89dc9cc38e44dbe71b2eb08fb513a3e60d515eb1dc23f5b929d1.wsp
How can I find the .wsp file associated with my tagged metric?
I'm just running into this problem as well: how to map the actual path/tag metric to its corresponding hashed .wsp file.
I don't think you can compute the actual metric name from the hash, but you can go the other way around, using Graphite's encoding methods.
I've quickly written a Python script just for lab purposes: it can take several metric names as input and returns a mapping.
Log into your Graphite host and create the script, mapper.py, in /opt/graphite/webapp/graphite/tags:
#!/opt/graphite/bin/python3
import sys
from utils import TaggedSeries

for line in sys.stdin:
    paths = line.split()
    for path in paths:
        # Normalize first
        parsed = TaggedSeries.parse(path)
        print(path + " -> /opt/graphite/storage/whisper/" + TaggedSeries.encode(parsed.path, '/', True) + ".wsp")
You can then pipe a list of metrics:
# echo "users.count;server=s1" |python mapper.py
users.count;server=s1 -> /opt/graphite/storage/whisper/_tagged/b6c/c91/b6cc916d608e4b145b318669606e79118cc41d316f96735dd43621db4fd2bcaf.wsp
You can also get all your tagged metrics and generate a file that you can later cat into the script. In this example I get all metrics associated with the tag 'server':
# curl -s "http://localhost/tags/findSeries?expr=server=~." | sed s/"\", \""/\\n/g > my_metrics
Then cat your metrics:
# cat my_metrics | python mapper.py
That's a starting point. From there you can easily do some simple scripting to delete .wsp files, for example ones that have not been updated in the last month.
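For instance, here is a minimal sketch (assuming the default Whisper storage location) that lists tagged .wsp files untouched for 30 days, so you can review them before deleting:
# Sketch: find tagged .wsp files that have not been updated in the last
# 30 days (assumes the default Whisper storage location).
import os
import time

ROOT = "/opt/graphite/storage/whisper/_tagged"
CUTOFF = time.time() - 30 * 24 * 3600

for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        if not name.endswith(".wsp"):
            continue
        full = os.path.join(dirpath, name)
        if os.path.getmtime(full) < CUTOFF:
            print(full)  # review the list first, then delete with os.remove(full)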
I am trying to install net-snmp from scratch to get SNMPv3 working on my computer.
I installed net-snmp and created the user, but when I run snmpget it rejects me with snmpget: Unknown user name.
To install net-snmp I followed the official guide.
I also installed the packages libperl-dev, snmp-mibs-downloader and snmp using sudo apt-get install.
Here is my configuration in /usr/local/share/snmp, where you can find the particular line rouser neutg:
###############################################################################
#
# EXAMPLE.conf:
# An example configuration file for configuring the Net-SNMP agent ('snmpd')
# See the 'snmpd.conf(5)' man page for details
#
# Some entries are deliberately commented out, and will need to be explicitly activated
#
###############################################################################
#
# AGENT BEHAVIOUR
#
# Listen for connections from the local system only
# agentAddress udp:127.0.0.1:161
# Listen for connections on all interfaces (both IPv4 *and* IPv6)
agentAddress udp:161,udp6:[::1]:161
###############################################################################
#
# SNMPv3 AUTHENTICATION
#
# Note that these particular settings don't actually belong here.
# They should be copied to the file /var/lib/snmp/snmpd.conf
# and the passwords changed, before being uncommented in that file *only*.
# Then restart the agent
# createUser authOnlyUser MD5 "remember to change this password"
# createUser authPrivUser SHA "remember to change this one too" DES
# createUser internalUser MD5 "this is only ever used internally, but still change the password"
# If you also change the usernames (which might be sensible),
# then remember to update the other occurances in this example config file to match.
###############################################################################
#
# ACCESS CONTROL
#
# system + hrSystem groups only
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
# Full access from the local host
#rocommunity public localhost
# Default access to basic system info
rocommunity public default -V systemonly
# rocommunity6 is for IPv6
rocommunity6 public default -V systemonly
# Full access from an example network
# Adjust this network address to match your local
# settings, change the community string,
# and check the 'agentAddress' setting above
#rocommunity secret 10.0.0.0/16
# Full read-only access for SNMPv3
rouser authOnlyUser
# Full write access for encrypted requests
# Remember to activate the 'createUser' lines above
#rwuser authPrivUser priv
# It's no longer typically necessary to use the full 'com2sec/group/access' configuration
# r[ow]user and r[ow]community, together with suitable views, should cover most requirements
###############################################################################
#
# SYSTEM INFORMATION
#
# Note that setting these values here, results in the corresponding MIB objects being 'read-only'
# See snmpd.conf(5) for more details
sysLocation Sitting on the Dock of the Bay
sysContact Me <me@example.org>
# Application + End-to-End layers
sysServices 72
#
# Process Monitoring
#
# At least one 'mountd' process
proc mountd
# No more than 4 'ntalkd' processes - 0 is OK
proc ntalkd 4
# At least one 'sendmail' process, but no more than 10
proc sendmail 10 1
# Walk the UCD-SNMP-MIB::prTable to see the resulting output
# Note that this table will be empty if there are no "proc" entries in the snmpd.conf file
#
# Disk Monitoring
#
# 10MBs required on root disk, 5% free on /var, 10% free on all other disks
disk / 10000
disk /var 5%
includeAllDisks 10%
# Walk the UCD-SNMP-MIB::dskTable to see the resulting output
# Note that this table will be empty if there are no "disk" entries in the snmpd.conf file
#
# System Load
#
# Unacceptable 1-, 5-, and 15-minute load averages
load 12 10 5
# Walk the UCD-SNMP-MIB::laTable to see the resulting output
# Note that this table *will* be populated, even without a "load" entry in the snmpd.conf file
###############################################################################
#
# ACTIVE MONITORING
#
# send SNMPv1 traps
trapsink localhost public
# send SNMPv2c traps
#trap2sink localhost public
# send SNMPv2c INFORMs
#informsink localhost public
# Note that you typically only want *one* of these three lines
# Uncommenting two (or all three) will result in multiple copies of each notification.
#
# Event MIB - automatically generate alerts
#
# Remember to activate the 'createUser' lines above
iquerySecName internalUser
rouser internalUser
# generate traps on UCD error conditions
defaultMonitors yes
# generate traps on linkUp/Down
linkUpDownNotifications yes
###############################################################################
#
# EXTENDING THE AGENT
#
#
# Arbitrary extension commands
#
extend test1 /bin/echo Hello, world!
extend-sh test2 echo Hello, world! ; echo Hi there ; exit 35
#extend-sh test3 /bin/sh /tmp/shtest
# Note that this last entry requires the script '/tmp/shtest' to be created first,
# containing the same three shell commands, before the line is uncommented
# Walk the NET-SNMP-EXTEND-MIB tables (nsExtendConfigTable, nsExtendOutput1Table
# and nsExtendOutput2Table) to see the resulting output
# Note that the "extend" directive supercedes the previous "exec" and "sh" directives
# However, walking the UCD-SNMP-MIB::extTable should still returns the same output,
# as well as the fuller results in the above tables.
#
# "Pass-through" MIB extension command
#
#pass .1.3.6.1.4.1.8072.2.255 /bin/sh PREFIX/local/passtest
#pass .1.3.6.1.4.1.8072.2.255 /usr/bin/perl PREFIX/local/passtest.pl
# Note that this requires one of the two 'passtest' scripts to be installed first,
# before the appropriate line is uncommented.
# These scripts can be found in the 'local' directory of the source distribution,
# and are not installed automatically.
# Walk the NET-SNMP-PASS-MIB::netSnmpPassExamples subtree to see the resulting output
#
# AgentX Sub-agents
#
# Run as an AgentX master agent
master agentx
# Listen for network connections (from localhost)
# rather than the default named socket /var/agentx/master
#agentXSocket tcp:localhost:705
rouser neutg
Here is my persistent configuration file, /var/net-snmp/snmpd.conf:
createUser neutg SHA "password" AES passphrase
The command I run is:
snmpget -u neutg -A password -a SHA -X 'passphrase'
-x AES -l authPriv localhost -v 3 1.3.6.1.2.1.1
I don't understand why it does not take my user into account. (I did restart snmpd after adding the user, multiple times!)
The version of net-snmp I use:
Thanks in advance :)
After much research I've found what the problem is.
snmpd was not taking my configuration files into account. I saw this using the command:
snmpd -Dread_config -H 2>&1 | grep "Reading" | sort -u
This tells you which configuration files are loaded by snmpd.
You can also see it by looking at the configuration file /var/lib/snmp/snmpd.conf. When snmpd handles your users, it creates special lines in the file. They look like:
usmUser 1 3 0x80001f888074336938f74f7c5a00000000 "neutg" "neutg" NULL .1.3.6.1.6.3.10.1.1.3 0xf965e4ab0f35eebb3f0e3b30\
6bc0797c025821c5 .1.3.6.1.6.3.10.1.2.4 0xe277044beccd9991d70144c4c8f4b672 0x
usmUser 1 3 0x80001f888074336938f74f7c5a00000000 "myuser" "myuser" NULL .1.3.6.1.6.3.10.1.1.2 0x2223c2d00758353b7c3076\
236be02152 .1.3.6.1.6.3.10.1.2.2 0x2223c2d00758353b7c3076236be02152 0x
setserialno 1424757026
So if you do not see any usmUser lines, you probably added your users incorrectly.
The solution:
sudo /usr/local/sbin/snmpd -c /var/net-snmp/snmpd.conf -c /usr/local/share/snmp/snmpd.conf
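As an optional cross-check once snmpd is reading the right files, the same SNMPv3 request can be reproduced from Python with pysnmp (an addition of mine, not part of the original setup; it needs pip install pysnmp). The credentials mirror the createUser line shown earlier:
# Optional sanity check with pysnmp, mirroring the snmpget command above:
# SHA authentication, AES privacy, user 'neutg'.
from pysnmp.hlapi import (
    SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
    usmHMACSHAAuthProtocol, usmAesCfb128Protocol,
)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    UsmUserData('neutg', 'password', 'passphrase',
                authProtocol=usmHMACSHAAuthProtocol,
                privProtocol=usmAesCfb128Protocol),
    UdpTransportTarget(('localhost', 161)),
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')),  # sysDescr.0 - a leaf OID
))

if error_indication:
    print(error_indication)          # e.g. 'Unknown user name' would show up here
else:
    for name, value in var_binds:
        print(f"{name} = {value}")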