MediaWiki with Docker: can't locate LocalSettings.php and cannot access database - docker-compose

I have set up MediaWiki (version 1.35.1) with MariaDB using Docker on my Raspberry Pi 3.
When I first run the docker-compose file, everything works fine and I can create the LocalSettings.php.
Problem 1:
I copy the LocalSettings.php to the path /data/conf/LocalSettings.php, uncomment the corresponding line in the docker-compose.yml file and restart with docker-compose restart. But I always land back on the "Please complete the installation" page. It only works when I copy the file into the mediawiki container at /var/www/html.
Why is MediaWiki not able to find the LocalSettings.php? Or am I misunderstanding something?
Here is my docker-compose.yml file:
version: '3.3'

services:
  mediawiki:
    image: mediawiki
    restart: always
    ports:
      - ${BIND_TO}:${INT_PORT}
    links:
      - database
    volumes:
      - data_mw_images:/var/www/html/images
      - ./data/sitemap:/var/www/html/sitemap
      # After the initial setup, download LocalSettings.php to the data/conf
      # directory, then uncomment the following line and use compose to
      # restart the mediawiki service
      #- ./data/conf/LocalSettings.php:/var/www/html/LocalSettings.php:ro
      #- ./data/conf/.htaccess:/var/www/html/.htaccess:ro
      # Special stuff (Google AdSense & Search Console)
      #- ./data/conf/robots.txt:/var/www/html/robots.txt:ro
      #- ./data/conf/ads.txt:/var/www/html/ads.txt:ro
      # MediaWiki extensions
      #- ./data/extensions/WikiCategoryTagCloud:/var/www/html/extensions/WikiCategoryTagCloud:ro
      #- ./data/extensions/CategoryTagCloud:/var/www/html/extensions/CategoryTagCloud:ro
      #- ./data/extensions/SelectCategory:/var/www/html/extensions/SelectCategory:ro
      #- ./data/extensions/googleAnalytics:/var/www/html/extensions/googleAnalytics:ro
      #- ./data/extensions/GoogleAdSense:/var/www/html/extensions/GoogleAdSense:ro
      #- ./data/extensions/Lockdown:/var/www/html/extensions/Lockdown:ro
      #- ./data/extensions/MobileFrontend:/var/www/html/extensions/MobileFrontend:ro
      #- ./data/extensions/CookieWarning:/var/www/html/extensions/CookieWarning:ro
      #- ./data/extensions/RelatedArticles:/var/www/html/extensions/RelatedArticles:ro
      #- ./data/extensions/Description2:/var/www/html/extensions/Description2:ro
      # MediaWiki skins
      #- ./data/skins/MinervaNeue:/var/www/html/skins/MinervaNeue:ro
  database:
    image: linuxserver/mariadb:arm32v6-latest
    volumes:
      - data_mw_db:/var/lib/mysql
    restart: always
    environment:
      # see https://phabricator.wikimedia.org/source/mediawiki/browse/master/includes/DefaultSettings.php
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_RANDOM_ROOT_PASSWORD: ${MYSQL_RANDOM_ROOT_PASSWORD}
      TZ: ${TZ}

volumes:
  data_mw_db:
  data_mw_images:
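For reference, these are the exact commands I run after uncommenting the mount (one assumption on my side: as far as I understand, docker-compose restart restarts the existing containers without recreating them, so maybe docker-compose up -d is needed before a new volume entry takes effect?):

# What I currently do:
docker-compose restart

# Alternative that recreates containers whose definition changed:
docker-compose up -d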
Problem 2:
When I copy the LocalSettings.php into the container, it works fine.
But when I restart my Pi and try to access the MediaWiki page, I get another error (the error does not appear every time I restart; sometimes it works... 🤦‍♂️):
Sorry! This site is experiencing technical difficulties.
Try waiting a few minutes and reloading.
(Cannot access the database)
Backtrace:
#0 /var/www/html/includes/libs/rdbms/loadbalancer/LoadBalancer.php(934): Wikimedia\Rdbms\LoadBalancer->reportConnectionError()
#1 /var/www/html/includes/libs/rdbms/loadbalancer/LoadBalancer.php(901): Wikimedia\Rdbms\LoadBalancer->getServerConnection(0, 'mywikidb', 0)
#2 /var/www/html/includes/libs/rdbms/loadbalancer/LoadBalancer.php(1046): Wikimedia\Rdbms\LoadBalancer->getConnection(-1, Array, 'mywikidb', 0)
#3 /var/www/html/includes/GlobalFunctions.php(2469): Wikimedia\Rdbms\LoadBalancer->getMaintenanceConnectionRef(-1, Array, 'mywikidb')
#4 /var/www/html/includes/cache/localisation/LCStoreDB.php(56): wfGetDB(-1)
#5 /var/www/html/includes/cache/localisation/LocalisationCache.php(449): LCStoreDB->get('en', 'deps')
#6 /var/www/html/includes/cache/localisation/LocalisationCache.php(495): LocalisationCache->isExpired('en')
#7 /var/www/html/includes/cache/localisation/LocalisationCache.php(414): LocalisationCache->initLanguage('en')
#8 /var/www/html/includes/cache/localisation/LocalisationCache.php(333): LocalisationCache->loadSubitem('en', 'messages', 'title-invalid-e...')
#9 /var/www/html/languages/Language.php(2645): LocalisationCache->getSubitem('en', 'messages', 'title-invalid-e...')
#10 /var/www/html/includes/cache/MessageCache.php(1030): Language->getMessage('title-invalid-e...')
#11 /var/www/html/includes/cache/MessageCache.php(988): MessageCache->getMessageForLang(Object(LanguageEn), 'title-invalid-e...', false, Array)
#12 /var/www/html/includes/cache/MessageCache.php(930): MessageCache->getMessageFromFallbackChain(Object(LanguageEn), 'title-invalid-e...', false)
#13 /var/www/html/includes/language/Message.php(1304): MessageCache->get('title-invalid-e...', false, Object(LanguageEn))
#14 /var/www/html/includes/language/Message.php(862): Message->fetchMessage()
#15 /var/www/html/includes/language/Message.php(954): Message->toString('text')
#16 /var/www/html/includes/title/MalformedTitleException.php(51): Message->text()
#17 /var/www/html/includes/title/MediaWikiTitleCodec.php(346): MalformedTitleException->__construct('title-invalid-e...', '')
#18 /var/www/html/includes/Title.php(3413): MediaWikiTitleCodec->splitTitleString('', 0)
#19 /var/www/html/includes/Title.php(427): Title->secureAndSplit('')
#20 /var/www/html/includes/MediaWiki.php(88): Title::newFromURL(NULL)
#21 /var/www/html/includes/MediaWiki.php(151): MediaWiki->parseTitle()
#22 /var/www/html/includes/MediaWiki.php(902): MediaWiki->getTitle()
#23 /var/www/html/includes/MediaWiki.php(543): MediaWiki->main()
#24 /var/www/html/index.php(53): MediaWiki->run()
#25 /var/www/html/index.php(46): wfIndexMain()
#26 {main}
Does anyone know how to fix that?
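My guess is that after a reboot MariaDB is not yet ready when MediaWiki makes its first connection. If so, would a healthcheck plus a dependency condition help? A sketch of what I mean (untested; the mysqladmin call and support for depends_on conditions in newer docker-compose releases are assumptions on my part):

services:
  database:
    healthcheck:
      # Assumption: mysqladmin is available inside the linuxserver/mariadb image.
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1"]
      interval: 10s
      timeout: 5s
      retries: 10
  mediawiki:
    depends_on:
      database:
        condition: service_healthy  # only honored by compose versions that support conditions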
Here is the LocalSettings.php:
<?php
# This file was automatically generated by the MediaWiki 1.35.1
# installer. If you make manual changes, please keep track in case you
# need to recreate them later.
#
# See includes/DefaultSettings.php for all configurable settings
# and their default values, but don't forget to make changes in _this_
# file, not there.
#
# Further documentation for configuration settings may be found at:
# https://www.mediawiki.org/wiki/Manual:Configuration_settings
# Protect against web entry
if ( !defined( 'MEDIAWIKI' ) ) {
	exit;
}
## Uncomment this to disable output compression
# $wgDisableOutputCompression = true;
$wgSitename = "Wiki Home";
$wgMetaNamespace = "Wiki_Home";
## The URL base path to the directory containing the wiki;
## defaults for all runtime URL paths are based off of this.
## For more information on customizing the URLs
## (like /w/index.php/Page_title to /wiki/Page_title) please see:
## https://www.mediawiki.org/wiki/Manual:Short_URL
$wgScriptPath = "";
## The protocol and server name to use in fully-qualified URLs
$wgServer = "http://192.168.178.22:9002";
## The URL path to static resources (images, scripts, etc.)
$wgResourceBasePath = $wgScriptPath;
## The URL paths to the logo. Make sure you change this from the default,
## or else you'll overwrite your logo when you upgrade!
$wgLogos = [ '1x' => "$wgResourceBasePath/resources/assets/wiki.png" ];
## UPO means: this is also a user preference option
$wgEnableEmail = true;
$wgEnableUserEmail = true; # UPO
$wgEmergencyContact = "apache@🌻.invalid";
$wgPasswordSender = "apache@🌻.invalid";
$wgEnotifUserTalk = false; # UPO
$wgEnotifWatchlist = false; # UPO
$wgEmailAuthentication = true;
## Database settings
$wgDBtype = "mysql";
$wgDBserver = "database";
$wgDBname = "wikidb";
$wgDBuser = "mediawikiuser";
$wgDBpassword = "xxx";
# MySQL specific settings
$wgDBprefix = "";
# MySQL table options to use during installation or update
$wgDBTableOptions = "ENGINE=InnoDB, DEFAULT CHARSET=binary";
# Shared database table
# This has no effect unless $wgSharedDB is also set.
$wgSharedTables[] = "actor";
## Shared memory settings
$wgMainCacheType = CACHE_ACCEL;
$wgMemCachedServers = [];
## To enable image uploads, make sure the 'images' directory
## is writable, then set this to true:
$wgEnableUploads = false;
$wgUseImageMagick = true;
$wgImageMagickConvertCommand = "/usr/bin/convert";
# InstantCommons allows wiki to use images from https://commons.wikimedia.org
$wgUseInstantCommons = false;
# Periodically send a pingback to https://www.mediawiki.org/ with basic data
# about this MediaWiki instance. The Wikimedia Foundation shares this data
# with MediaWiki developers to help guide future development efforts.
$wgPingback = true;
## If you use ImageMagick (or any other shell command) on a
## Linux server, this will need to be set to the name of an
## available UTF-8 locale. This should ideally be set to an English
## language locale so that the behaviour of C library functions will
## be consistent with typical installations. Use $wgLanguageCode to
## localise the wiki.
$wgShellLocale = "C.UTF-8";
## Set $wgCacheDirectory to a writable directory on the web server
## to make your wiki go slightly faster. The directory should not
## be publicly accessible from the web.
#$wgCacheDirectory = "$IP/cache";
# Site language code, should be one of the list in ./languages/data/Names.php
$wgLanguageCode = "en";
$wgSecretKey = "xxx";
# Changing this will log out all existing sessions.
$wgAuthenticationTokenVersion = "1";
# Site upgrade key. Must be set to a string (default provided) to turn on the
# web installer while LocalSettings.php is in place
$wgUpgradeKey = "cbbac7c99e42cd92";
## For attaching licensing metadata to pages, and displaying an
## appropriate copyright notice / icon. GNU Free Documentation
## License and Creative Commons licenses are supported so far.
$wgRightsPage = ""; # Set to the title of a wiki page that describes your license/copyright
$wgRightsUrl = "";
$wgRightsText = "";
$wgRightsIcon = "";
# Path to the GNU diff3 utility. Used for conflict resolution.
$wgDiff3 = "/usr/bin/diff3";
## Default skin: you can change the default skin. Use the internal symbolic
## names, ie 'vector', 'monobook':
$wgDefaultSkin = "vector";
# Enabled skins.
# The following skins were automatically enabled:
wfLoadSkin( 'MonoBook' );
wfLoadSkin( 'Timeless' );
wfLoadSkin( 'Vector' );
# End of automatically generated settings.
# Add more configuration options below.
$wgSessionCacheType = CACHE_DB;
Hope someone can help me.
Thanks and regards

Related

How to reinitialize hashicorp vault

I'm working on automating a HashiCorp Vault process, and I need to repeatedly run the vault operator init command for trial-and-error testing. I tried uninstalling Vault and installing it again, but that doesn't seem to remove the previously generated unseal keys + root token. How can I do this?
I read somewhere that I needed to delete my storage "file" path, which I already did, but it's not working (actually, my /opt/vault/data/ directory is empty). Here is my vault.hcl file:
# Full configuration options can be found at
# https://www.vaultproject.io/docs/configuration

ui = true

#mlock = true
#disable_mlock = true

storage "file" {
  path = "/opt/vault/data"
}

#storage "consul" {
#  address = "127.0.0.1:8500"
#  path    = "vault"
#}

# HTTP listener
#listener "tcp" {
#  address     = "127.0.0.1:8200"
#  tls_disable = 1
#}

# HTTPS listener
listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/opt/vault/tls/tls.crt"
  tls_key_file  = "/opt/vault/tls/tls.key"
}

# Enterprise license_path
# This will be required for enterprise as of v1.8
#license_path = "/etc/vault.d/vault.hclic"

# Example AWS KMS auto unseal
#seal "awskms" {
#  region     = "us-east-1"
#  kms_key_id = "REPLACE-ME"
#}

# Example HSM auto unseal
#seal "pkcs11" {
#  lib            = "/usr/vault/lib/libCryptoki2_64.so"
#  slot           = "0"
#  pin            = "AAAA-BBBB-CCCC-DDDD"
#  key_label      = "vault-hsm-key"
#  hmac_key_label = "vault-hsm-hmac-key"
#}
Best practice for this type of setup is actually Terraform or Chef or any other stateful transformer. That way you can bring the environment to an ideal state (terraform apply) and easily remove it (terraform destroy).
To reinitialize Vault, you can bring it down, delete the data folder ("/opt/vault/data" in your case), and bring up another instance (see the command sketch below):
Delete /opt/vault/data
Reboot your computer
(You may also need to delete the file located at ~/.vault-token)
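A minimal sketch of that reset sequence (assuming Vault runs as a systemd service named vault; adjust paths and service name to your setup):

# Stop Vault and wipe the file storage backend.
sudo systemctl stop vault
sudo rm -rf /opt/vault/data/*

# Drop the cached client token as well.
rm -f ~/.vault-token

# Start a fresh instance and initialize it from scratch.
sudo systemctl start vault
vault operator init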
If you only want to do testing, why don't you use Vault in dev mode?
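Dev mode runs an in-memory, auto-unsealed server, so every restart is a clean slate:

vault server -dev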

Capistrano deployment is not happenning after Server IP Change

Problem: recently we changed the IP address of the stage server. We are using Capistrano for deploying a Rails application. After changing the server IP address, running the command cap develop (branch name) deploy no longer works. Please find the config files below:
deploy.rb
# config valid for current version and patch releases of Capistrano
lock "~> 3.10.0"
set :application, "app_name"
set :repo_url, "git@bitbucket.org:repo.git"
set :branch, :develop
set :deploy_to, '/home/deploy/app_name'
set :pty, true
set :linked_files, %w{config/mongoid.yml config/application.yml}
set :linked_dirs, %w{ bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system public/uploads}
set :keep_releases, 5
set :rvm_type, :user
set :rvm_ruby_version, 'ruby-2.3.1' # Edit this if you are using MRI Ruby
set :bundle_binstubs, nil
set :puma_rackup, -> { File.join(current_path, 'config.ru') }
set :puma_state, "#{shared_path}/tmp/pids/puma.state"
set :puma_pid, "#{shared_path}/tmp/pids/puma.pid"
set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock" #accept array for multi-bind
set :puma_conf, "#{shared_path}/puma.rb"
set :puma_access_log, "#{shared_path}/log/puma_error.log"
set :puma_error_log, "#{shared_path}/log/puma_access.log"
set :puma_role, :app
set :puma_env, fetch(:rack_env, fetch(:rails_env, 'production'))
set :puma_threads, [0, 8]
set :puma_workers, 0
set :puma_worker_timeout, nil
set :puma_init_active_record, false
set :puma_preload_app, false
# Default branch is :master
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp
# Default deploy_to directory is /var/www/my_app_name
# set :deploy_to, "/var/www/my_app_name"
# Default value for :format is :airbrussh.
# set :format, :airbrussh
# You can configure the Airbrussh format using :format_options.
# These are the defaults.
# set :format_options, command_output: true, log_file: "log/capistrano.log", color: :auto, truncate: :auto
# Default value for :pty is false
# set :pty, true
# Default value for :linked_files is []
# append :linked_files, "config/database.yml", "config/secrets.yml"
# Default value for linked_dirs is []
# append :linked_dirs, "log", "tmp/pids", "tmp/cache", "tmp/sockets", "public/system"
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for local_user is ENV['USER']
# set :local_user, -> { `git config user.name`.chomp }
# Default value for keep_releases is 5
# set :keep_releases, 5
# Uncomment the following to require manually verifying the host key before first deploy.
# set :ssh_options, verify_host_key: :secure
namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
config/deploy/develop.rb
# server-based syntax
# ======================
# Defines a single server with a list of roles and multiple properties.
# You can define all roles on a single server, or split them:
# server "example.com", user: "deploy", roles: %w{app db web}, my_property: :my_value
# server "example.com", user: "deploy", roles: %w{app web}, other_property: :other_value
# server "db.example.com", user: "deploy", roles: %w{db}
server '<new_ip>', user: 'deploy', roles: %w{web app db}
# role-based syntax
# ==================
# Defines a role with one or multiple servers. The primary server in each
# group is considered to be the first unless any hosts have the primary
# property set. Specify the username and a domain or IP for the server.
# Don't use `:all`, it's a meta role.
# role :app, %w{deploy@example.com}, my_property: :my_value
# role :web, %w{user1@primary.com user2@additional.com}, other_property: :other_value
# role :db, %w{deploy@example.com}
# Configuration
# =============
# You can set any configuration variable like in config/deploy.rb
# These variables are then only loaded and set in this stage.
# For available Capistrano configuration variables see the documentation page.
# http://capistranorb.com/documentation/getting-started/configuration/
# Feel free to add new variables to customise your setup.
# Custom SSH Options
# ==================
# You may pass any option but keep in mind that net/ssh understands a
# limited set of options, consult the Net::SSH documentation.
# http://net-ssh.github.io/net-ssh/classes/Net/SSH.html#method-c-start
#
# Global options
# --------------
# set :ssh_options, {
# keys: %w(/home/rlisowski/.ssh/id_rsa),
# forward_agent: false,
# auth_methods: %w(password)
# }
#
# The server-based syntax can be used to override options:
# ------------------------------------
# server "example.com",
# user: "user_name",
# roles: %w{web app},
# ssh_options: {
# user: "user_name", # overrides user setting above
# keys: %w(/home/user_name/.ssh/id_rsa),
# forward_agent: false,
# auth_methods: %w(publickey password)
# # password: "please use keys"
# }
Not sure what we are missing; any help would be appreciated.
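One thing worth checking (an assumption, since no error output is shown): after an IP change, SSH host key verification can fail and block Capistrano before any deploy output appears. Clearing stale known_hosts entries and logging in once by hand rules that out:

# Remove cached host keys for the old and new addresses (placeholders, as above).
ssh-keygen -R <old_ip>
ssh-keygen -R <new_ip>

# Confirm a plain SSH login works, then retry the deploy.
ssh deploy@<new_ip>
cap develop deploy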

getting timeout when submitting fat jar to spark-jobserver (akka.pattern.AskTimeoutException)

I have built my job jar using sbt assembly so that all dependencies are in one jar. When I try to submit my binary to spark-jobserver, I get akka.pattern.AskTimeoutException.
I modified my configuration to be able to submit large jars (I added parsing.max-content-length = 300m to my configuration) and also increased some of the timeouts in the configuration, but nothing helped.
After I run:
curl --data-binary @matching-ml-assembly-1.0.jar localhost:8090/jars/matching-ml
I am getting:
{
  "status": "ERROR",
  "result": {
    "message": "Ask timed out on [Actor[akka://JobServer/user/binary-manager#1785133213]] after [3000 ms]. Sender[null] sent message of type \"spark.jobserver.StoreBinary\".",
    "errorClass": "akka.pattern.AskTimeoutException",
    "stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)", "akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)", "scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)", "scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:331)", "akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:282)", "akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:286)", "akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:238)", "java.lang.Thread.run(Thread.java:745)"]
  }
}
My configuration:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
  # spark.master will be passed to each job's JobContext
  master = "local[4]"
  # master = "mesos://vm28-hulk-pub:5050"
  # master = "yarn-client"

  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 4

  jobserver {
    port = 8090
    context-per-jvm = false

    # Note: JobFileDAO is deprecated from v0.7.0 because of issues in
    # production and will be removed in future, now defaults to H2 file.
    jobdao = spark.jobserver.io.JobSqlDAO

    filedao {
      rootdir = /tmp/spark-jobserver/filedao/data
    }

    datadao {
      # storage directory for files that are uploaded to the server
      # via POST/data commands
      rootdir = /tmp/spark-jobserver/upload
    }

    sqldao {
      # Slick database driver, full classpath
      slick-driver = slick.driver.H2Driver

      # JDBC driver, full classpath
      jdbc-driver = org.h2.Driver

      # Directory where default H2 driver stores its data. Only needed for H2.
      rootdir = /tmp/spark-jobserver/sqldao/data

      # Full JDBC URL / init string, along with username and password. Sorry, needs to match above.
      # Substitutions may be used to launch job-server, but leave it out here in the default or tests won't pass
      jdbc {
        url = "jdbc:h2:file:/tmp/spark-jobserver/sqldao/data/h2-db"
        user = ""
        password = ""
      }

      # DB connection pool settings
      dbcp {
        enabled = false
        maxactive = 20
        maxidle = 10
        initialsize = 10
      }
    }

    # When using chunked transfer encoding with scala Stream job results, this is the size of each chunk
    result-chunk-size = 1m
  }

  # Predefined Spark contexts
  # contexts {
  #   my-low-latency-context {
  #     num-cpu-cores = 1       # Number of cores to allocate. Required.
  #     memory-per-node = 512m  # Executor memory per node, -Xmx style eg 512m, 1G, etc.
  #   }
  #   # define additional contexts here
  # }

  # Universal context configuration. These settings can be overridden, see README.md
  context-settings {
    num-cpu-cores = 2      # Number of cores to allocate. Required.
    memory-per-node = 2G   # Executor memory per node, -Xmx style eg 512m, #1G, etc.

    # In case spark distribution should be accessed from HDFS (as opposed to being installed on every Mesos slave)
    # spark.executor.uri = "hdfs://namenode:8020/apps/spark/spark.tgz"

    # URIs of Jars to be loaded into the classpath for this context.
    # Uris is a string list, or a string separated by commas ','
    # dependent-jar-uris = ["file:///some/path/present/in/each/mesos/slave/somepackage.jar"]

    # Add settings you wish to pass directly to the sparkConf as-is such as Hadoop connection
    # settings that don't use the "spark." prefix
    passthrough {
      #es.nodes = "192.1.1.1"
    }
  }

  # This needs to match SPARK_HOME for cluster SparkContexts to be created successfully
  # home = "/home/spark/spark"
}

# Note that you can use this file to define settings not only for job server,
# but for your Spark jobs as well. Spark job configuration merges with this configuration file as defaults.

spray.can.server {
  # uncomment the next lines for making this an HTTPS example
  # ssl-encryption = on
  # path to keystore
  #keystore = "/some/path/sjs.jks"
  #keystorePW = "changeit"

  # see http://docs.oracle.com/javase/7/docs/technotes/guides/security/StandardNames.html#SSLContext for more examples
  # typical are either SSL or TLS
  encryptionType = "SSL"
  keystoreType = "JKS"
  # key manager factory provider
  provider = "SunX509"
  # ssl engine provider protocols
  enabledProtocols = ["SSLv3", "TLSv1"]

  idle-timeout = 60 s
  request-timeout = 20 s
  connecting-timeout = 5s
  pipelining-limit = 2 # for maximum performance (prevents StopReading / ResumeReading messages to the IOBridge)

  # Needed for HTTP/1.0 requests with missing Host headers
  default-host-header = "spray.io:8765"

  # Increase this in order to upload bigger job jars
  parsing.max-content-length = 300m
}

akka {
  remote.netty.tcp {
    # This controls the maximum message size, including job results, that can be sent
    # maximum-frame-size = 10 MiB
  }
}
I ran into a similar issue. The way to solve it is a bit tricky. First you need to add spark.jobserver.short-timeout to your configuration. Just modify your configuration like this:
jobserver {
  port = 8090
  context-per-jvm = false
  short-timeout = 60s
  ...
}
The second (tricky) part is that you can't fix it without modifying the code of the spark-jobserver application. The attribute that causes the timeout is in the class BinaryManager:
implicit val daoAskTimeout = Timeout(3 seconds)
The default is set to 3 seconds, which apparently is not enough for a big jar. You can increase it, for example to 60 seconds, which solved the problem for me:
implicit val daoAskTimeout = Timeout(60 seconds)
Actually, you can easily bring down the size of the jars. Some of the dependent jars can be passed using dependent-jar-uris instead of being assembled into one big fat jar.
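A sketch of that approach, using the dependent-jar-uris setting shown commented out in the template above (the jar path here is hypothetical):

context-settings {
  # Load shared dependencies from a separate jar instead of baking them into the assembly.
  dependent-jar-uris = ["file:///opt/jobserver/libs/my-dependencies.jar"]
}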

Gitlab Nginx - Setting redirect http to https makes requests timeout

So, I just installed GitLab on my server. I'm running the bundled Nginx on port 256, and I've set up HTTPS using Let's Encrypt. There's still a small problem: you can access it through a normal HTTP address, which throws an Nginx error, since my external address is https://example.com:256. So I set the redirect_http_to_https setting, and now all requests just time out... Any ideas?
My gitlab.rb config:
## Url on which GitLab will be reachable.
## For more details on configuring external_url see:
## https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/config$
external_url 'https://example.com:256'
#####################
# GitLab Web server #
#####################
## see: https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc/settings/nginx.md#using-a-non-bundled-web-server
## When bundled nginx is disabled we need to add the external webserver user to the GitLab webserver group.
# web_server['external_users'] = []
# web_server['username'] = 'gitlab-www'
# web_server['group'] = 'gitlab-www'
# web_server['uid'] = nil
# web_server['gid'] = nil
# web_server['shell'] = '/bin/false'
# web_server['home'] = '/var/opt/gitlab/nginx'
################
# GitLab Nginx #
################
## see: https://gitlab.com/gitlab-org/omnibus-gitlab/tree/master/doc/settings/nginx.md
nginx['enable'] = true
# nginx['client_max_body_size'] = '250m'
nginx['redirect_http_to_https'] = true
# nginx['redirect_http_to_https_port'] = 8080
# nginx['ssl_client_certificate'] = "/etc/gitlab/ssl/ca.crt" # Most root CA's are included by default
# nginx['ssl_verify_client'] = "off" # enable/disable 2-way SSL client authentication
# nginx['ssl_verify_depth'] = "1" # if ssl_verify_client on, verification depth in the client certificates chain
nginx['ssl_certificate'] = "/etc/letsencrypt/live/example.com-0001/fullchain.pem"
nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/example.com-0001/privkey.pem"
# nginx['ssl_ciphers'] = "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"
# nginx['ssl_prefer_server_ciphers'] = "on"
# nginx['ssl_protocols'] = "TLSv1 TLSv1.1 TLSv1.2" # recommended by https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html & https://cipherli.st/
# nginx['ssl_session_cache'] = "builtin:1000 shared:SSL:10m" # recommended in http://nginx.org/en/docs/http/ngx_http_ssl_module.html
# nginx['ssl_session_timeout'] = "5m" # default according to http://nginx.org/en/docs/http/ngx_http_ssl_module.html
# nginx['ssl_dhparam'] = nil # Path to dhparams.pem, eg. /etc/gitlab/ssl/dhparams.pem
# nginx['listen_addresses'] = ['*']
# nginx['listen_port'] = nil # override only if you use a reverse proxy: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#setting-the-nginx-listen-port
# nginx['listen_https'] = nil # override only if your reverse proxy internally communicates over HTTP: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md#supporting-proxied-ssl
nginx['custom_gitlab_server_config'] = "location ^~ /.well-known { root /var/www/letsencrypt; }"
# nginx['custom_nginx_config'] = "include /etc/nginx/conf.d/example.conf;"
# nginx['proxy_read_timeout'] = 3600
# nginx['proxy_connect_timeout'] = 300
# nginx['proxy_set_headers'] = {
# "Host" => "$http_host",
# "X-Real-IP" => "$remote_addr",
# "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
# "X-Forwarded-Proto" => "https",
# "X-Forwarded-Ssl" => "on"
# }
# nginx['proxy_cache_path'] = 'proxy_cache keys_zone=gitlab:10m max_size=1g levels=1:2'
# nginx['proxy_cache'] = 'gitlab'
# nginx['http2_enabled'] = true
# nginx['real_ip_trusted_addresses'] = []
# nginx['real_ip_header'] = nil
# nginx['real_ip_recursive'] = nil
Uncomment:
# nginx['redirect_http_to_https_port'] = 8080
and set it to port 80, like the following:
nginx['redirect_http_to_https_port'] = 80
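After editing /etc/gitlab/gitlab.rb, the change only takes effect once Omnibus regenerates its Nginx config:

sudo gitlab-ctl reconfigure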

Zurb foundation 5, Bower, Compass, Sinatra configuration

I'm trying to add Zurb Foundation with Sass to my Sinatra app using the foundation-cli found here. I generated a Foundation project and got all the generated files. I commented out the config.rb so I could configure Compass via the app file, i.e. mvp.rb.
I'm having trouble linking to the actual assets found in the bower_components dir. I could link everything like get 'bower_components/file' do ... end, but I think Compass can be used to pick up the files automatically based on the configuration; I'm just not sure how.
In my Haml views I have links to scripts that currently do not resolve, like %script{ src: 'js/jquery.js' }
config.rb
# Require any additional compass plugins here.
add_import_path "bower_components/foundation/scss"
# Set this to the root of your project when deployed:
# http_path = "/"
# css_dir = "stylesheets"
# sass_dir = "scss"
# images_dir = "images"
# javascripts_dir = "javascripts"
# You can select your preferred output style here (can be overridden via the command line):
# output_style = :expanded or :nested or :compact or :compressed
# To enable relative paths to assets via compass helper functions. Uncomment:
# relative_assets = true
# To disable debugging comments that display the original location of your selectors. Uncomment:
# line_comments = false
# If you prefer the indented syntax, you might want to regenerate this
# project again passing --syntax sass, or you can uncomment this:
# preferred_syntax = :sass
# and then run:
# sass-convert -R --from scss --to sass sass scss && rm -rf sass && mv scss sass
app directory:
Gemfile
Gemfile.lock
bower.json
bower_components
config.rb
index.html
js
mvp.rb
public
stylesheets
views
mvp.rb
#!/usr/bin/env ruby
# -*- coding: utf-8 -*-
require 'rubygems'
require 'bundler/setup'
require 'sinatra'
require 'haml'
require 'compass'

configure do
  Compass.configuration do |config|
    config.project_path = File.dirname(__FILE__)
    config.sass_dir = File.join '/views/scss'
    config.images_dir = File.join 'views', 'images'
    config.http_path = '/'
    config.http_images_path = '/images'
    config.http_stylesheets_path = '/stylesheets'
    config.javascripts_dir = "js"
  end

  set :haml, { format: :html5 }
  set :scss, { style: :compact, debug_info: false }
end

get '/stylesheets/:name.css' do |path|
  content_type "text/css", charset: "utf-8"
  scss :"scss/#{params[:name]}", Compass.sass_engine_options
end
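As a stop-gap I can serve the Bower files directly (a sketch only; it bypasses Compass and simply exposes the files, since Sinatra serves only ./public by default):

# Serve anything under bower_components as static files.
get '/bower_components/*' do
  requested = File.expand_path(File.join(settings.root, 'bower_components', params['splat'].first))
  # Guard against "../" path traversal before handing the file out.
  halt 404 unless requested.start_with?(File.join(settings.root, 'bower_components')) && File.file?(requested)
  send_file requested
end

The %script tags would then point at something like src: 'bower_components/jquery/dist/jquery.js' (the exact path inside bower_components is an assumption about the foundation-cli layout).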
27end