Raspberry Pi ca-certificates issue with wget - raspberry-pi

I want to run the sudo rpi-update command (the firmware updater).
I have tried many things; the time is set correctly and I have installed ca-certificates, but I still get the following error:
!!! Failed to download update for rpi-update
!!! Make sure you have ca-certificates installed and that the time is set correctly

I tried every proposed solution. None of them worked except purging and reinstalling ca-certificates:
sudo apt-get purge ca-certificates
sudo apt-get install ca-certificates
I ended up running
sudo apt-get autoremove
which removed a bunch of old packages that weren't previously marked for removal.
sudo rpi-update
The update then worked. I'm not sure whether this is a correct or elegant solution, but it was the only thing that worked. This question may be old, but I hope this helps someone, since I ran into this issue recently and it persisted for a long time.
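For anyone landing here, the full sequence that worked for me, plus an optional certificate-bundle refresh with update-ca-certificates that I did not run myself but which may help in similar cases:
sudo apt-get purge ca-certificates
sudo apt-get install ca-certificates
sudo update-ca-certificates --fresh   # optional: regenerate the /etc/ssl/certs bundle from scratch
sudo apt-get autoremove
sudo rpi-update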

Related

Is it possible to restore GitLab-CE from the files of a previous installation?

Recently I was forced to migrate away from the VPS hosting I had. I was away from home when I was asked to perform backups before the end of the lease, so I had no time to back up the data the way a backup should be done, which left me with all of my VPS data packed into a single .tar.gz.
Almost everything went smoothly; the last thing that gives me nightmares is restoring my previous GitLab-CE data.
This time I went with the dockerized version of GitLab-CE (gitlab/gitlab-ce:12.4.0-ce.0).
To be sure that I am not using the wrong version during this process, I extracted the version I had installed from one of the license files found in the backup.
I have confirmed that this image works without my data mounted, but whenever I tried to mount the volumes recovered from the backup:
/home/gitlab/logs:/var/log/gitlab
/home/gitlab/data:/var/opt/gitlab
/home/gitlab/config:/etc/gitlab
I was stuck with a fatal error from the Postgres database:
[execute] psql: FATAL: database locale is incompatible with operating system
DETAIL: The database was initialized with LC_COLLATE "en_US.UTF-8", which is not recognized by setlocale().
HINT: Recreate the database with another locale or install the missing locale.
I have tried setting the ENV variables LANG, LANGUAGE and LC_ALL to different values without any luck.
I have not been able to find a solution on the internet so far; I wish I had made the backup properly.
My goal is to restore the old repositories (and, if possible, the user accounts) from the old installation I have stored.
Anything that could lead me to a possible solution is very much appreciated!
This is my first question on Stack, so please forgive me if it is formed improperly or asked in the wrong section.
The solution marked as the accepted answer worked beautifully. For further readers, to summarize the final solution:
I created an image using a Dockerfile based on the specific version of the official gitlab-ce Docker image and included the locales packages.
The Dockerfile in question:
FROM gitlab/gitlab-ce:12.4.0-ce.0
# Update packages
RUN apt-get update
# Install common software to acquire the add-apt-repository command
RUN apt-get -y install software-properties-common
# Update packages
RUN apt-get update
# Add universe repository to get access to locales package
RUN add-apt-repository universe
# Update packages
RUN apt-get update
# Install missing locales & generate the en_US one
RUN apt-get install -yqq locales locales-all tzdata && locale-gen en_US.UTF-8
# Continue with official gitlab-ce commands from their image...
# Wrapper to handle signal, trigger runit and reconfigure GitLab
CMD ["/assets/wrapper"]
HEALTHCHECK --interval=60s --timeout=30s --retries=5 \
CMD /opt/gitlab/bin/gitlab-healthcheck --fail --max-time 10
Then it was just a simple
docker build - < Dockerfile
If it is bloated, or something is not necessary, feel free to correct me; otherwise, this is what worked for me in the end.
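For further readers, the run command with the recovered data mounted ended up looking roughly like this (I tagged the image when building; the tag, hostname and published ports below are placeholder choices of mine, while the volume paths are the ones from the question):
docker build -t gitlab-ce-locales:12.4.0 - < Dockerfile
docker run --detach \
  --hostname gitlab.example.com \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --volume /home/gitlab/config:/etc/gitlab \
  --volume /home/gitlab/logs:/var/log/gitlab \
  --volume /home/gitlab/data:/var/opt/gitlab \
  gitlab-ce-locales:12.4.0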
Thanks once again to VonC for providing me with what led to the final solution!
As in this thread, try and check if locales/tzdata are missing:
apt-get update && apt-get install -yqq locales tzdata && locale-gen en_US.UTF-8
This is similar to postgres-ai/custom-images issues/4, which means that, if this works at runtime, you might have to build your own GitLab image with:
RUN apt-get install -yqq locales locales-all tzdata && locale-gen en_US.UTF-8

Timescale not finding pg_config on AMI

I created a machine in AWS Cloud9 and I want to install TimescaleDB on that instance. I have previously installed and set up PostgreSQL 9.6 using yum.
The OS version is:
Amazon Linux AMI release 2018.03
When I run 'which pg_config', it is found here:
/usr/bin/pg_config
Looking at the install instructions on the timescale website, I came up with this:
sudo yum install -y https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-ami201503-96-9.6-2.noarch.rpm
wget https://timescalereleases.blob.core.windows.net/rpm/timescaledb-0.9.2-postgresql-9.6-0.x86_64.rpm
sudo yum install timescaledb-0.9.2-postgresql-9.6-0.x86_64.rpm
After the last command I get the following error:
Running transaction
Installing : timescaledb-0.9.2-0.el7.centos.x86_64   1/1
ERROR: Could not find pg_config, expected it at /usr/pgsql-9.6/bin/pg_config. Please fix and try again.
warning: %post(timescaledb-0.9.2-0.el7.centos.x86_64) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package timescaledb-0.9.2-0.el7.centos.x86_64
Do you have any more details on how you installed PostgreSQL on CentOS? I suspect this may have something to do with a mismatch between your pg_config installation and your PostgreSQL 9.6 installation, similar to the user in this issue.
I'd recommend uninstalling your current postgresql-devel and explicitly installing it for 9.6:
yum install postgresql96-devel
It seems the AMI is set up a bit differently from a normal CentOS install, so PostgreSQL is installed in a different place than our installer expected. I've gone ahead and updated the RPMs to use a more robust method of finding the correct place to put the files. If you could re-download the latest RPM and confirm that it works, that'd be great.
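As a quick sanity check before retrying (not part of the official instructions, just a way to see which pg_config the install scriptlet will find):
which pg_config          # where pg_config resolves on your PATH
pg_config --version      # confirm it belongs to the 9.6 installation
pg_config --bindir       # the bin directory this pg_config reports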
I've met exactly the same problem.
I solved it by manually linking them together:
sudo ln -s /usr/lib64/pgsql96/bin/pg_config /usr/pgsql-9.6/bin/pg_config
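Note that /usr/pgsql-9.6/bin may not exist at all on the AMI, so a slightly more defensive variant of the same workaround (same idea; paths assumed from the question and the answer above) is:
sudo mkdir -p /usr/pgsql-9.6/bin
sudo ln -sf "$(which pg_config)" /usr/pgsql-9.6/bin/pg_config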

Ubuntu 16.04 LTS login loop after updating to driver nvidia-396

I have an issue logging in to my computer when nvidia-396 is installed. It returns to the login screen after showing an error message pop-up. When I remove nvidia* and restart lightdm, it works fine.
Could you please help me fix this?
Thanks.
I had the same issue with this driver.
my system is:
Nvidia gtx 1060 (6gb)
AMD Fx 8350
ASUS motherboard
I was using the 390 driver (390.48), then upgraded to 396 and got this lightdm/nvidia driver problem.
It seems that many other users are getting this bug too.
Unfortunately there's no solution yet; the nvidia-396 driver is still in beta according to the Nvidia drivers page. Just purge the 396 driver and switch back to an older version, and everything should work fine.
If not, see this askubuntu question and this Nvidia topic (only steps 2, 4 and 5 are necessary for you, but the whole tutorial may still come in useful); it helped me get the drivers working again after I badly messed up some files and packages.
This is what I did when there was no login screen after upgrading the Nvidia driver; it worked for me.
Switch to a console with Ctrl+Alt+F1
Log in as root and remount the read-only file system read-write: mount -o remount,rw /
Stop lightdm: /etc/init.d/lightdm stop (if it is already inactive (dead), just copy the backup xorg.conf.new file from your root directory to /etc/X11/xorg.conf and reboot)
Then remove the old Nvidia drivers: apt-get remove --purge nvidia-*
Add the driver repository: add-apt-repository ppa:graphics-drivers/ppa
apt-get update
apt-get install nvidia-387
apt-get install ubuntu-desktop
Start lightdm: /etc/init.d/lightdm start, or reboot. (Finished)
I was able to fix this by fully removing the Nvidia drivers along with bumblebee.
sudo apt purge nvidia* bumblebee
And reinstalling
sudo apt install nvidia-396
Problem description
nvidia-396, which you may have installed intentionally or which may have been pulled in automatically by another package (such as swig), cannot be used properly on Ubuntu 16.04.
Solution
The best way to solve the problem is to first find the operation that pulled the driver in. To do this, check your command history:
vi ~/.bash_history
Then search for the "sudo" keyword, which marks the relevant commands, and look for suspects. In my case it was:
sudo apt-get install swig
Finally, revert it:
sudo apt-get purge swig
CAUTION: PLEASE NEVER DO
sudo apt-get upgrade
It will install the newest packages for your whole system, which will include nvidia-396.
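If you cannot avoid running full upgrades, one possible safeguard (my own addition, and it only helps if the unwanted driver would arrive as an upgrade of an already-installed package) is to put the working driver on hold; replace nvidia-390 with whichever driver you are keeping:
sudo apt-mark hold nvidia-390     # keep apt from upgrading or removing the working driver
# later, to allow normal upgrades again:
sudo apt-mark unhold nvidia-390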
For me, I just deleted .Xauthority (and two more files with different suffixes) from my home folder and everything was working fine again!
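Concretely, that was something along these lines (the exact file names vary, so the glob is a guess; the files are recreated at the next login):
cd ~
mkdir -p xauthority-backup
mv .Xauthority* xauthority-backup/   # back them up instead of deleting outright
# then log out and back in (or reboot)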

How to Install Metasploitable on External Device

I want to install the Metasploitable OS on an external device like a computer or a Raspberry Pi.
Is it possible?
I downloaded it, but it is in ".vmdk" format, not ".iso".
How can I convert it to an ISO, or how can I get this OS onto a computer?
Thanks.
There are a step or two involved, but this should cover just about the whole process. It's been a while since I've tried it, but I don't think much has changed since then:
sudo -i
wget http://downloads.metasploit.com/data/releases/framework-latest.tar.bz2
apt-get update; apt-get dist-upgrade
apt-get install ruby subversion libpcap
tar jxpf framework-latest.tar.bz2
cd msf3
./msfconsole
However, if you want more, check out http://www.pwnpi.com/, which has a few extra tools included.
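On the .vmdk question itself: Metasploitable is shipped as an x86 virtual disk, so it will not boot on a Raspberry Pi; the usual route is to run it in VirtualBox or VMware. If you really want it on a physical x86 machine, a rough sketch using qemu-img (the .vmdk file name and the /dev/sdX target are placeholders, so double-check the device before writing):
qemu-img convert -f vmdk -O raw Metasploitable.vmdk metasploitable.img
sudo dd if=metasploitable.img of=/dev/sdX bs=4M status=progress && sync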

ownCloud "Downgrading not supported" after apt-get upgrade

I am running an ownCloud installation on Raspbian on an RPi2 and I just ran:
apt-get update
apt-get dist-upgrade
Now I get the following message when I try to go to my ownCloud-site in the browser:
Downgrading is not supported and is likely to cause unpredictable
issues (from 8.2.2.2 to 8.1.5.2)
I did not make any changes and definitely didn't do any downgrade (consciously). The files are stored on an external HDD and seem to be unaffected. I wasn't really using my cloud storage actively yet (fortunately), so I wouldn't mind too much if the data I had put there was lost, but I'd like to keep the other data stored on the HDD (outside the ownCloud folder) if possible. What would you suggest as the best way forward? I thought about just removing ownCloud via apt-get purge - or would that be unsafe or leave some junk on my system (I'd maybe have to delete the database manually)? And how can I avoid this problem in the future?
Try connecting to the official repository:
wget -nv https://download.owncloud.org/download/repositories/stable/Ubuntu_14.04/Release.key -O Release.key
apt-key add - < Release.key
sh -c "echo 'deb http://download.owncloud.org/download/repositories/stable/Ubuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list"
apt-get update
apt-get dist-upgrade
Same problem here - I found out that I had the old ownCloud URL in my sources list:
http://download.opensuse.org/repositories/isv:/ownCloud:/community/Debian_7.0/
The new one is (check the official page):
http://download.owncloud.org/download/repositories/stable/Debian_7.0/
Change it, run the upgrade again, and manually disable maintenance mode (in /var/www/owncloud/config/config.php).
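For the maintenance-mode step, assuming the default install path /var/www/owncloud, you can either edit config.php by hand and set 'maintenance' => false, or flip it from the shell, for example:
sudo sed -i "s/'maintenance' => true,/'maintenance' => false,/" /var/www/owncloud/config/config.php
The occ command shown in a later answer (php occ maintenance:mode --off) achieves the same thing.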
Same issue for me... just this morning...
I think it's necessary to download the current release from ownCloud and re-install starting from it.
But before that, I'll check whether it is possible to add ownCloud as an apt package source, so that a new apt-get update / apt-get upgrade will solve the problem and avoid similar issues in the future.
Have a look at https://www.der-webcode.de/owncloud-manuelles-updateupgrade-von-owncloud/. Works fine for me. Don't forget to set maintenance to false in your config.php.
Torsten
Thank you, that fixed it. If you still have problems, try this:
Stop the upgrade process this way:
cd /var/www/owncloud/
sudo -u www-data php occ maintenance:mode --off
And start the manual process:
sudo -u www-data php occ upgrade
If this does not work properly, try the repair function:
sudo -u www-data php occ maintenance:repair