I am new to TrueNAS. I want to back up a folder on a Windows PC to a TrueNAS server using cwRsync. When connecting to TrueNAS I get the following error:
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receiver=3.1.3]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(228) [sender=3.2.3]
Any thoughts on this?
I have enabled SSH on the TrueNAS server, and the command I used is:
rsync -v -a -H -h /cygdrive/c/work/ test@192.168.5.200:/mnt/test_path/test
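A connection that closes with 0 bytes received usually means the remote shell or remote rsync never started at all, so it can help to test the SSH transport on its own first. A minimal sketch, assuming the user, host and paths from the command above and the cwRsync shell on the Windows side:

# From the cwRsync shell on the Windows PC, check that SSH login works and rsync exists on the TrueNAS side:
ssh test@192.168.5.200 "which rsync"

# If that succeeds, retry the transfer while forcing rsync over SSH explicitly:
rsync -avHh -e ssh /cygdrive/c/work/ test@192.168.5.200:/mnt/test_path/test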
Related
I am using Docker Compose to build images for a Python app and MSSQL and to connect the app to the database. I added the DB container as the server in the Python connection file, but I am getting errors like:
pyodbc.OperationalError: ('08001', '[08001] [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
Adaptive Server is unavailable or does not exist
'DRIVER={FreeTDS};' 'SERVER=MSSQL_DB;' 'PORT=1433;' 'DATABASE=MYDATABASE;' 'TDS_Version=7.4', autocommit=True)
my_python_app | pyodbc.OperationalError: ('08001', '[08001] [FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
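"Unable to connect to data source" from FreeTDS is usually a network-level failure rather than a driver problem, so it is worth confirming that the app container can actually resolve and reach the database service. A hedged sketch, assuming the names my_python_app and MSSQL_DB from the output above are the Compose service names (they may differ in your compose file):

# Check that the DB hostname resolves from inside the app container:
docker compose exec my_python_app getent hosts MSSQL_DB

# Check that port 1433 is reachable from the app container:
docker compose exec my_python_app python -c "import socket; socket.create_connection(('MSSQL_DB', 1433), timeout=5); print('port 1433 reachable')"

If neither works, the two services are probably not on the same Compose network, or SQL Server has not finished starting by the time the app connects.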
I am trying to deploy an Elixir application in a Docker container. It deploys successfully, but when I call an API endpoint it returns the following error:
{
  errors: {
    detail: "Internal Server Error"
  }
}
After checking the logs with docker logs [CONTAINER_ID], the following errors appear (it cannot connect to the database).
My database is on AWS Aurora and I am able to connect to it using pgAdmin.
18:21:26.624 [error] #PID<0.652.0> running ABC.Endpoint (connection #PID<0.651.0>, stream id 1) terminated
Server: localhost:4000 (http)
Request: GET /api/v1/popular-searches/us/en
** (exit) an exception was raised:
** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 2850ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:
1. Tracking down slow queries and making sure they are running fast enough
2. Increasing the pool_size (albeit it increases resource consumption)
3. Allowing requests to wait longer by increasing :queue_target and :queue_interval
See DBConnection.start_link/2 for more information
(ecto_sql 3.5.3) lib/ecto/adapters/sql.ex:751: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto_sql 3.5.3) lib/ecto/adapters/sql.ex:684: Ecto.Adapters.SQL.execute/5
(ecto 3.5.5) lib/ecto/repo/queryable.ex:229: Ecto.Repo.Queryable.execute/4
(ecto 3.5.5) lib/ecto/repo/queryable.ex:17: Ecto.Repo.Queryable.all/3
(ecto 3.5.5) lib/ecto/repo/queryable.ex:157: Ecto.Repo.Queryable.one!/3
(_api 0.1.1) lib/api_web/controllers/V1/cms_data_controller.ex:14: ApiWeb.V1.CMSDataController.get_popular_searches/2
(_api 0.1.1) lib/_api_web/controllers/V1/cms_data_controller.ex:1: ApiWeb.V1.CMSDataController.action/2
(_api 0.1.1) lib/_api_web/controllers/V1/cms_data_controller.ex:1: ApiWeb.V1.CMSDataController.phoenix_controller_pipeline/2
18:21:26.695 [error] Postgrex.Protocol (#PID<0.487.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.714 [error] Postgrex.Protocol (#PID<0.479.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.718 [error] Postgrex.Protocol (#PID<0.469.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
18:21:26.810 [error] Postgrex.Protocol (#PID<0.493.0>) failed to connect: ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :
I have checked the environment variables in the container and they are correct, the database URL is correct, and my Dockerfile looks like this:
# base image elixir to start with
FROM elixir:1.13.4
# install hex package manager
RUN mix local.hex --force
RUN curl -o phoenix_new.ez https://github.com/phoenixframework/archives/raw/master/phoenix_new.ez
RUN mix archive.install ./phoenix_new.ez
RUN mkdir /app
COPY . /app
WORKDIR /app
ENV MIX_ENV=prod
ENV PORT=4000
ENV DATABASE_URL=postgres://[URL]
RUN mix local.rebar --force
RUN mix deps.get --only prod
RUN mix compile
RUN mix phx.digest
CMD mix phx.server
I build the image and then start the container like this:
docker build . -t [name]
docker run --name [name] -p 4000:4000 -d [name]
What am I doing wrong?
Any help would be appreciated.
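One detail worth noting in the log above: the failed connections go to postgres:5432, not to the Aurora endpoint, which suggests the running app is not actually picking up DATABASE_URL. A minimal sketch of how to check this, reusing the placeholders from the commands above ([AURORA_HOST] stands in for the host part of the URL):

# See which DATABASE_URL the running container actually has:
docker exec [CONTAINER_ID] env | grep DATABASE_URL

# Check that the Aurora endpoint resolves from inside the container:
docker exec [CONTAINER_ID] getent hosts [AURORA_HOST]

# Passing the URL at run time instead of baking it into the image is another option:
docker run --name [name] -p 4000:4000 -e DATABASE_URL=postgres://[URL] -d [name]

If the variable is present but the app still tries to reach postgres, the application config may be reading the URL at compile time rather than at runtime.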
I am writing a shell script that uses the command below to copy code files from a remote server to the local server, skipping some files, but it gives errors like the following.
Command:
rsync -avz --delete --exclude=**/cache --exclude=**/administrator/cache/ --exclude=**/tmp --exclude=**/configuration.php -e ssh $REMOTE_USER@$REMOTE_SERVER:$REMOTE_PATH $LOCAL_PATH
Errors:
1) rsync: mkstemp "/var/www/test.domainname/public/.sript.php.4FRyfv" failed: Permission denied (13)
2) rsync: mkstemp "/var/www/test.domainname/public/.access.txt.PECuqA" failed: Permission denied (13)
3) rsync: failed to set times on "/var/www/test.domainname/public/administrator/components/com_bconnect": Operation not permitted (1)
administrator/components/com_bconnect/
4) rsync: mkstemp "/var/www/test.domainname/public/administrator/components/com_bconnect/.config.xml.8LWLWF" failed: Permission denied (13)
Can you please help me out with the above four errors?
I've just come across this error in a different form, i.e. attempting to rsync to a nested directory in /var/www/html on a remote server and not having write permission to the /var/www/html directory itself. In my case my error turned out to be due to mangled rsync syntax, but in your case you probably don't have permission to write to /var/www. This is where rsync is attempting to create its temporary files.
From my understanding you have two options:
1. use the --temp-dir parameter
2. use the --inplace parameter
This is explained in the rsync man page and has also been asked before.
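For illustration, a sketch of what each option looks like applied to the command from the question (the exclude flags are omitted here for brevity; the variables are the same placeholders as above):

# Option 1: have rsync create its temporary files somewhere the receiving user can write to
rsync -avz --delete --temp-dir=/tmp -e ssh $REMOTE_USER@$REMOTE_SERVER:$REMOTE_PATH $LOCAL_PATH

# Option 2: update the destination files in place, avoiding the temporary copies entirely
rsync -avz --delete --inplace -e ssh $REMOTE_USER@$REMOTE_SERVER:$REMOTE_PATH $LOCAL_PATH

Either way, the user running rsync on the receiving side still needs write permission on the destination files themselves.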
This is basically a permission issue with the remote/local directory to or from which the data is copied. I solved this error in a slightly different form again: I was sending files from a local Ubuntu machine to a remote Ubuntu machine with this command.
rsync -arvz -e 'ssh -p 64060' ./SE-D-20-00279R2.pdf yogender@<IP>:</path/to/destination>
I had the same permission error:
rsync: mkstemp "<path/of/the/file2/you/try/to/rsync>" failed: Permission denied (13)
....
....
....
rsync: mkstemp "<path/of/the/file2/you/try/to/rsync>" failed: Permission denied (13)
After fiddling around I found this solution:
Problem: the receiving side does not have permission to write to the destination directory.
Solution: create the destination folder on the receiving side (the remote machine) with sudo.
sudo mkdir </path/to/directory/you/are/sending/file/to/remote/machine>
Then give ownership of that folder to the user the data will be sent to:
sudo chown yogender </path/to/directory/you/are/sending/file/to/remote/machine>
Here yogender is the remote admin or user; you can get it by running whoami on the remote machine (when you are sending data from local to remote).
Then simply:
rsync -zaP -e 'ssh -p 64060' ./SE-D-20-00279R2.pdf yogender@147.32.99.72:</path/to/directory/you/are/sending/file/to/remote/machine>
I am frequently getting this error message in the Postgres log: "failed to set up event for socket: error code 10038". Because of this, connection attempts are failing.
I have uninstalled and reinstalled Postgres 9.6, and I have added postgres.exe to the antivirus scan exclusion list. Are there any solutions for this?
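Not a confirmed fix, but Winsock error 10038 is WSAENOTSOCK ("socket operation on nonsocket"), which on Windows often points at third-party software (antivirus or firewall components layered into the socket stack) rather than at PostgreSQL itself. One low-risk thing to try, from an elevated command prompt, is resetting the Winsock catalog and rebooting:

REM Reset the Winsock catalog to its default state (requires administrator rights)
netsh winsock reset

If the errors continue after a reboot, checking which security software hooks into Winsock on that machine would be the next step.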
I'm trying to run Firebird's nBackup from the command line as shown below:
C:\nbackup.exe -U sysdba -P masterkey -B 0 C:\nomedobanco.FDB C:\nomedobackup.TMPNBK
but it fails with the following error:
The application failed to initialize properly (0xc000007b). Click OK to close the application.
What should I do? Has anyone run into something similar?