Not able to create an admin user in GitLab server - docker-compose

I have installed the GitLab server using Docker Compose, but after the installation I am not getting the default screen where the first admin user is created. Instead, it is asking me to enter a username and password.

If I see this correctly, the gitlab_root_password should be set in the Compose file. This should then be used to log in for the first time.
Source: https://docs.gitlab.com/ee/install/docker.html#install-gitlab-using-docker-swarm-mode
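For reference, a minimal docker-compose.yml sketch that sets the initial root password via GITLAB_OMNIBUS_CONFIG (the image tag, hostname, ports and the example password are placeholders, not values from the question):

version: '3.6'
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.com
    environment:
      # sets the password for the built-in 'root' user on first boot
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['initial_root_password'] = 'change-me-please'
    ports:
      - "80:80"
      - "443:443"
      - "2222:22"

After the first start you then log in with the username root and that password instead of going through a first-user screen.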

Related

Can't create user for ceph dashboard

I'm trying to create a user for the Ceph dashboard with the admin role. The version is Nautilus 14.2.19, deployed with a manual installation.
I've installed the dashboard module, installed all dependencies, and enabled it. My dashboard is reachable from the monitor IP on the default port of 8443.
When I run the command:
ceph dashboard ac-user-create <user> <pw> administrator
I get the following error:
Please specify the file containing the password/secret with "-i" option.
After digging for information about this, it says there must be a file in bcrypt format. Is there a default file created for this? Or, if one needs to be created, how can I do it?
Nevermind, it seems you just need to create a text file and write your password in it.
When you run the command like this:
ceph dashboard ac-user-create <user> -i /file/location administrator
It creates the user and applies the password in an encrypted format.
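For example, a minimal sketch of the whole sequence (the username, password and file path are placeholders):

# write the desired password to a temporary file
echo -n 'MySecretPassword' > /tmp/dashboard_password.txt
# create the dashboard user with the administrator role, reading the password from the file
ceph dashboard ac-user-create admin -i /tmp/dashboard_password.txt administrator
# remove the plaintext file afterwards
rm /tmp/dashboard_password.txt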

How to log in after following the steps in "firecracker custom rootfs using alpine"

I have followed the steps below for creating a custom rootfs image for booting with Firecracker:
https://github.com/firecracker-microvm/firecracker/blob/master/docs/rootfs-and-kernel-setup.md
Once the VM is up, it asks for the login username and password.
I have tried root/root, just like the credentials for the hello-rootfs image provided in the examples, but I am unable to log in with them.
Do we need to add any other module / configuration apart from the steps mentioned in the doc for user login?
You can use chroot to set the root password inside the mounted rootfs:
chroot /my-rootfs
passwd
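For example, assuming the image name and mount point used in the linked guide (rootfs.ext4 and /my-rootfs are the guide's example names), the full sequence would look roughly like this:

# mount the ext4 rootfs image you built
mkdir -p /my-rootfs
mount -o loop rootfs.ext4 /my-rootfs
# switch into the rootfs and set the root password interactively
chroot /my-rootfs /bin/sh
passwd
exit
# unmount before booting the image with firecracker
umount /my-rootfs

After that, logging in as root with the password you just set should work.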

Cannot login to keycloak admin console when running in domain cluster mode

Following the documentation guide, I have booted up a master and slave and I can see it connected via the logs:
Boot up master
$ domain.sh --host-config=host-master.xml
Boot up slave
$ domain.sh --host-config=host-slave.xml
I've also followed the steps to set up the admin user via the add-user.sh. Further research indicated that I should use the add-user-keycloak.sh script to add an initial admin user:
./add-user-keycloak.sh -u john
Press ctrl-d (Unix) or ctrl-z (Windows) to exit
Password:
Added 'john' to '../standalone/configuration/keycloak-add-user.json', restart server to load user
I reran the master and slave, but I cannot log in to the admin console.
However, what's interesting is that when I tried to boot up in standalone mode, I was able to log in to the admin console as john:
./standalone.sh
Is this a bug or am I missing something (most likely) that's not in the documentation?
Thanks in advance...
Figured it out, hope this helps somebody.
Before you start in domain cluster mode:
./domain.sh --host-config=host-master.xml
./domain.sh --host-config=host-slave.xml
you must first create the admin user using the --sc flag so that you can log in to the admin console; otherwise add-user-keycloak.sh only adds the admin user for standalone mode. To do that:
./add-user-keycloak.sh --sc ../domain/servers/server-one/configuration -u john -p password
If the configuration folder does not exist, create the directory.
The ./add-user-keycloak.sh script seems to be a little outdated. Currently (as of Keycloak 12.0.2) it creates the keycloak-add-user.json file in the ./domain/configuration/ directory - that is wrong!
The file should be in ./domain/servers/server-one/configuration.
Now you just have to move the file to that directory, restart the server and it should work properly.
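A quick sketch of that workaround (assuming the default server name server-one; adjust the path if your host configuration uses different server names):

# create the target directory if it does not exist yet
mkdir -p ./domain/servers/server-one/configuration
# move the file generated by add-user-keycloak.sh to where the domain server expects it
mv ./domain/configuration/keycloak-add-user.json ./domain/servers/server-one/configuration/
# then restart the domain so the new admin user is loaded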
I found this solution in this 2-year-old email thread:
https://lists.jboss.org/pipermail/keycloak-user/2018-January/012642.html

Mattermost "An existing user is already attached to your gitlab account"

We run Mattermost using the 'Production Docker' setup as described in the Mattermost documentation. For authentication, we federate using GitHub:Enterprise.
To set up our Mattermost team, I imported the whole Slack history. This led to the problem that everyone who had not yet logged into Mattermost via GitHub:Enterprise was not able to log in. Mattermost helpfully returned the error message
"An existing user is already attached to your gitlab account"
How can I fix this issue without having to set up a new Mattermost instance and force everyone to log in once before importing the Slack data?
Prerequisites
In order for this to work, you need
GitHub:Enterprise Administrator permissions
On the Mattermost machine, either root permissions or an account that is allowed to control docker, and, if psql is not installed, a way to install the psql command-line tool.
Steps
SSH into the Mattermost VM/machine (where the Mattermost Docker containers are running).
Change to an account with Docker permissions (root; or the account you set up during Mattermost installation; or ...)
Use docker ps and note the hash of the container mattermostdocker_db. We will assume it starts with 5c23.
Run docker inspect 5c23 | grep IPAddress. Note the IP address of the container. We will assume it is 172.17.0.2.
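If you prefer, docker inspect can also print just the IP address with a format template (5c23 being the example container hash from above, on the default bridge network):

docker inspect -f '{{ .NetworkSettings.IPAddress }}' 5c23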
Ensure that the psql command-line tool is installed on the machine where mattermost/docker is running.
On Debian: apt-get install postgresql-client
Connect to the mattermost db of postgresql running inside the docker container:
psql -h 172.17.0.2 -p 5432 -d mattermost -U postgres -W
The (default?) password seems to be postgres.
Verify that a user account with the correct email exists. Assume the email of the account that has the problem is 'john@example.com'.
mattermost=# select email, authdata from users where email = 'john@example.com';
Connect to GitHub:Enterprise and open the admin console. We will assume the local github enterprise instance is at https://github.example.com.
Click on the rocket symbol, or
https://github.example.com/stafftools
Click on all users and find the user who cannot log in. We assume the GitHub username is john, which corresponds to https://github.example.com/john
Visit the stafftools user security page for that user.
https://github.example.com/stafftools/users/john/security
Click on the 'Search logs' link under the 'Audit logs' header. This will open a page with a query field. On this page, you will find the internal github user number for that user. Note this number. We will assume the number is 37.
Back in the psql console, update the user entry with the correct number:
update users set authservice = 'gitlab', authdata = '37' where email = 'john@example.com';
Exit the psql console with \q:
mattermost=# \q
Done. The user can now log into Mattermost with GitHub:Enterprise user authentication.
Notes
Don't forget to complete each statement in psql with a ;
It's gitlab, not github, even if you use GitHub:Enterprise
Tested with Mattermost 3.0, GitHub:Enterprise 2.6.2

Capistrano insisting on password

First, my teammate is successfully deploying on almost exactly the same setup and using the exact same deploy config as me. Therefore, it cannot be a deploy configuration issue; there is nothing local or unique to either of our machines.
Second, I can successfully log in from my machine using ssh user@server.com without a password prompt.
However, I have tried everything to stop Capistrano from asking this question:
--recursive; fi"
servers: ["myserver.com"]
Password:
* [deploy:update_code] rolling back
I have tried every single password I have, and not entering a password. I don't even know what this password is for. Is it SSH? Because I don't even have a password-protected key file.
I'm totally lost and I've literally been debugging this for 5 hours now without a single change in status. I'd really appreciate some help on how I can find out what the problem is.
Note: cap deploy simply works for my teammate using the same config and the same server. Everything is the same except for a different key file (note that mine works and was tested via the ssh command).
Do you have to specify user@server.com to SSH to your server successfully (i.e., do you have a different username on your remote server than on your local machine)?
You might just need to tell Capistrano what username it should be using to connect with by adding it to your deploy.rb:
set :user, "your-username"
You could also change the default username SSH will pick for that server by using ~/.ssh/config:
Host your.server.name
User your-username
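If the wrong key is being picked up, you can also point Capistrano at a specific key via ssh_options in deploy.rb; the exact syntax depends on your Capistrano version, and the key path below is just a placeholder:

# Capistrano 2-style deploy.rb
ssh_options[:keys] = ["#{ENV['HOME']}/.ssh/your_deploy_key"]
ssh_options[:forward_agent] = false

# Capistrano 3 equivalent:
# set :ssh_options, { keys: %w(~/.ssh/your_deploy_key), forward_agent: false }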