My objective is to execute a rake task on my apps running in OpsWorks.
It appears to me that my OpsWorks cookbook is not running rake from the correct directory.
How can I tell the cookbook to run in the app home dir (so it can pick up the Gemfile)?
Do I need to specify a RAILS_ENV?
My cookbook's default.rb:
Chef::Log.info("****** Audit Photo URLS : Running Rake Task ******")
execute "rake audit:audi_image_urls" do
command "bundle exec rake audit:audi_image_urls"
end
Errors from Opsworks log:
[2014-11-28T18:36:33+00:00] INFO: Running queued delayed notifications before re-raising exception
[2014-11-28T18:36:33+00:00] ERROR: Running exception handlers
[2014-11-28T18:36:33+00:00] ERROR: Exception handlers complete
[2014-11-28T18:36:33+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache.stage2/chef-stacktrace.out
[2014-11-28T18:36:33+00:00] ERROR: execute[rake audit:audi_image_urls] (auditphoto::default line 3) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '10'
---- Begin output of bundle exec rake audit:audi_image_urls ----
STDOUT: Could not locate Gemfile
STDERR:
---- End output of bundle exec rake audit:audi_image_urls ----
Ran bundle exec rake audit:audi_image_urls returned 10
[2014-11-28T18:36:33+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
The execute resource can take a cwd attribute for the working directory from which the command is run.
execute "rake audit:audi_image_urls" do
command "bundle exec rake audit:audi_image_urls"
cwd '/over/there'
environment 'RAILS_ENV' => 'production'
end
OpsWorks Deploy events and Execute Recipes commands
Chef 11
OpsWorks populates node[:deploy]['appshortname'] for Deploy events and Execute Recipes stack commands to house each application's configuration. With this data you could use:
execute "rake audit:audi_image_urls" do
command "bundle exec rake audit:audi_image_urls"
cwd node[:deploy]['appshortname'][:deploy_to]
user node[:deploy]['appshortname'][:user]
group node[:deploy]['appshortname'][:group]
environment( { 'RAILS_ENV' => node[:deploy]['appshortname'][:rails_env] } )
end
You may want to source :environment_variables for the environment if you have anything related configured there.
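For example, a minimal sketch of merging them in, assuming the app's variables are exposed under node[:deploy]['appshortname'][:environment_variables] (check the attribute names against your own stack):

execute "rake audit:audi_image_urls" do
  command "bundle exec rake audit:audi_image_urls"
  cwd node[:deploy]['appshortname'][:deploy_to]
  user node[:deploy]['appshortname'][:user]
  group node[:deploy]['appshortname'][:group]
  # Merge the app's configured environment variables with RAILS_ENV
  environment node[:deploy]['appshortname'][:environment_variables].to_hash.merge(
    'RAILS_ENV' => node[:deploy]['appshortname'][:rails_env]
  )
end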
Chef 12
From the AWS stack settings documentation:
In Chef 12 Linux, stack settings are available as Chef data bags and are accessed only through Chef search. Data bags are stored on AWS OpsWorks Stacks instances in a set of JSON files in the /var/chef/runs/run-ID/data_bags directory, where run-ID is a unique ID that AWS OpsWorks Stacks assigns to each Chef run on an instance. Stack settings are no longer available as Chef attributes, so stack settings can no longer be accessed through the Chef node object. For more information, see the AWS OpsWorks Stacks Data Bag Reference.
app = search("aws_opsworks_app").first

execute "rake audit:audi_image_urls" do
  command "bundle exec rake audit:audi_image_urls"
  cwd app['app_source']['deploy_to']
  user app['app_source']['user']
  group app['app_source']['group']
  environment( { 'RAILS_ENV' => app['app_source']['rails_env'] } )
end
Other events and commands
It looks like OpsWorks runs a little differently from a normal Chef server: it supplies its own JSON blob to a local chef instance for each run, which means (as you mentioned) the :deploy attributes will be missing for other events/commands Amazon chooses not to supply JSON for.
It might be possible, but very hacky and prone to breakage, to populate the :deploy attributes from the last JSON file in /var/lib/aws/opsworks/chef that contains deploy state: {"deploy": { "app_name": { "application": "app_name" } } }
You would also need to source the deploy::default attributes after that JSON load to fill in any defaults.
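A very rough sketch of that hack, purely illustrative: the file naming under /var/lib/aws/opsworks/chef varies between OpsWorks versions and none of this is a supported interface:

require 'json'

# Pick the most recent OpsWorks run JSON that carries deploy state
latest = Dir.glob('/var/lib/aws/opsworks/chef/*.json')
            .sort_by { |f| File.mtime(f) }
            .reverse
            .find { |f| JSON.parse(File.read(f)).key?('deploy') }

if latest
  # Populate the missing :deploy attributes from that run's JSON
  node.normal[:deploy] = JSON.parse(File.read(latest))['deploy']
  # You would still need to layer the deploy::default attribute defaults on top of this
end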
Related
According to the Beam harness documentation:
PROCESS: User code is executed by processes that are automatically started by the runner on each worker node.
import logging

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions

args = [
    "--runner=portableRunner",
    "--streaming",
    "--sdk_worker_parallelism=2",
    "--environment_type=PROCESS",
    "--environment_config={\"command\": \"/opt/apache/beam/boot\"}",
]

consumer_config = {
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "AWS_MSK_IAM",
    "sasl.jaas.config": "software.amazon.msk.auth.iam.IAMLoginModule required;",
    "sasl.client.callback.handler.class": "software.amazon.msk.auth.iam.IAMClientCallbackHandler",
    "bootstrap.servers": bootstrap_servers,
}

with beam.Pipeline(options=PipelineOptions(args)) as p:
    data = p | "Reading messages from Kafka" >> ReadFromKafka(
        consumer_config=consumer_config,
        topics=topics,
        with_metadata=True
    )
    data | 'Writing to stdout' >> beam.Map(logging.info)
But when I run the code (deployed to k8s using flinkk8soperator), it is complaining:
Caused by: java.io.IOException: Cannot run program "docker": error=2, No such file or directory
Am I misunderstanding anything? Thanks!
After some digging, I finally made the cross-language pipeline work without using DinD or DooD. Here are the steps:
Ensure both the job manager and task manager mount a shared volume for artifact staging. (This is required; otherwise the task manager will complain that it is unable to find the submitted jar.)
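As a rough illustration in plain Kubernetes pod-spec terms (the PVC name and mount path below are assumptions, and where these fields live in your flinkk8soperator manifest depends on the operator version):

# Mount the same volume into both the job manager and task manager pods
volumes:
  - name: artifact-staging
    persistentVolumeClaim:
      claimName: beam-artifact-staging   # hypothetical PVC name
containers:
  - name: taskmanager                    # repeat the identical volumeMount for the jobmanager
    volumeMounts:
      - name: artifact-staging
        mountPath: /tmp/beam-artifact-staging   # assumed staging path; must be the same on both sides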
Ensure your docker image can run both java and python beam code, here's what I did:
# python SDK
COPY --from=apache/beam_python3.7_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam/
# java SDK
COPY --from=apache/beam_java8_sdk:2.41.0 /opt/apache/beam/ /opt/apache/beam_java/
In the job, you'll need to start the expansion service with extra args; for example, for KafkaIO:
from apache_beam.io.kafka import ReadFromKafka, default_io_expansion_service

ReadFromKafka(
    consumer_config=consumer_config,
    topics=[topic],
    with_metadata=False,
    expansion_service=default_io_expansion_service(
        append_args=[
            '--defaultEnvironmentType=PROCESS',
            "--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam_java/boot\"}",
        ]
    )
)
Your portable execution relies on cross-language (xLang) support, which by default requires starting the Java SDK harness with Docker. Your cluster doesn't have Docker installed.
I am trying to execute chef-solo locally on my Windows VM using PowerShell. (I am trying to execute the cookbook while provisioning and customising a Windows VM in the cloud.)
All cookbook recipe dependencies are available within the cookbook.
I created solo.rb for recipe execution and web.json to run the recipe.
"type": "powershell",
"inline": "chef-client --chef-license=accept -z -o cookbooks\\cookbook_workstation"
Error:
WARN: Failed to read the private key C:\chef\client.pem: #<Errno::ENOENT: No such file or directory # rb_sysopen - C:\chef\client.pem>
WARN: Error while reporting run start to Data Collector. URL: https://localhost:443/data-collector Exception: No HTTP Code -- I cannot read C:\chef\client.pem, which you told me to use to sign requests!
FATAL: NoMethodError: undefined method `logger' for nil:NilClass
<internal:warning>:43:in `warn': warning: Chef::Compliance::Runner#logger at C:/opscode/chef/embedded/lib/ruby/2.7.0/forwardable.rb:154 forwarding to private method NilClass#logger (StructuredWarnings::BuiltInWarning)
I am using chef-solo so I don't need a Chef server. How can I overcome the client.pem WARN and the logger issue? Pointers would be very helpful.
I am trying to bring up my Fabric network.
I got my orderers organization started.
I got my peer organizations started.
I got my cli started.
After that, the request fails with:
OCI runtime exec failed:
exec failed: container_linux.go:348 : starting container process caused "no such file or directory": unknown
The error means that either working_dir is undefined, or it does not exist.
Check the cli section in your docker-compose file for the above setting.
If you are working on Windows OS, a possible cause is the file encoding (should be in Unix format).
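For reference, a typical cli service from the fabric-samples compose files looks roughly like this; make sure your own working_dir points at a directory that actually exists inside the image:

cli:
  container_name: cli
  image: hyperledger/fabric-tools
  tty: true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
  command: /bin/bash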
You could open this page:
https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
And search "No such file or directory". There is some related trouble shooting.
Just a short description:
Ensure that the file in question is encoded in the Unix format. This was most likely caused by not setting core.autocrlf to false in your Git configuration. There are several ways of fixing this. If you have access to the vim editor for instance, open the file:
vim ./path/to/the/related-file
Then change its format by executing the following vim command:
:set ff=unix
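If you prefer not to use vim, the same fix can be applied from a shell (dos2unix, if installed, converts the file; the git setting is the core.autocrlf option mentioned above):

git config --global core.autocrlf false
dos2unix ./path/to/the/related-file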
I am trying to run a command as another user remotely from Rundeck jobs.
Rundeck provides a sudo login/password mechanism for privilege escalation.
On my remote server the sudoers file has the NOPASSWD option. How can I configure Rundeck jobs in this case?
My node configuration:
<project>
<node name="testServer"
type="Node"
description="testNode"
hostname="IP_ADDRESS"
username="${option.Login}"
ssh-authentication="password"
sudo-command-enabled="true"
ssh-password-option="option.Password"
/>
</project>
Example command in the job to become another user:
sudo /bin/su suuser -
Errors after trying to execute:
Remote command failed with exit status -1
08:57:13 Sudo execution password response failed: Failed waiting for input prompt: Expected input was not seen in 5000 milliseconds
08:57:13 Failed: NonZeroResultCode: Remote command failed with exit status -1
I added the following three attributes to my node and it appears to work:
sudo-command-enabled="true"
sudo-prompt-pattern=""
sudo-command-pattern="^sudo.*"
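Putting those together with the node definition from the question, the entry looks something like this:

<project>
  <node name="testServer"
        type="Node"
        description="testNode"
        hostname="IP_ADDRESS"
        username="${option.Login}"
        ssh-authentication="password"
        ssh-password-option="option.Password"
        sudo-command-enabled="true"
        sudo-prompt-pattern=""
        sudo-command-pattern="^sudo.*"
        />
</project>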
You have to set the attributes below in the Edit Project Configuration File:
project.ssh-authentication=password
project.ssh-keypath=/home/rundeck/.ssh/id_rsa
project.ssh-password-storage-path=keys/xyz.password
project.sudo-command-enabled=true
project.sudo-password-option=option.jobPassword
project.sudo-password-storage-path=keys/xyz.password
project.sudo-prompt-pattern=^Password\:.*
The following is the syntax I use to start my HSQL database before running JUnit tests.
java -cp ./hsqldb.jar org.hsqldb.server.Server --database.0 file:mydb --dbname.0 xdb
What is the syntax to stop this database from the command line?
Thank you. I made progress, but now I get this error when attempting to shut down.
Failed to get a connection to 'jdbc:hsqldb:file:C:\My Projects\Libraries\junit\mydb;shutdown=true' as user "SA".
Cause: Database lock acquisition failure: lockFile: org.hsqldb.persist.LockFile#74715985[file =C:\My Projects\Libraries\junit\mydb.lck, exists=true, locked=false, valid=false, ] method: checkHeartbeat read: 2014-01-31 19:06:52 heartbeat - read: -9919 ms.
My START command.
java -cp ./hsqldb.jar org.hsqldb.server.Server --database.0 file:mydb --dbname.0 xdb
My sqltool.rc
# A personal, local, persistent database
urlid xdb
url jdbc:hsqldb:file:C:\My Projects\Libraries\junit\mydb;shutdown=true
username SA
password
My STOP command.
java -jar sqltool.jar --sql 'SHUTDOWN;' xdb
I resolved the issue: I needed to use a server URL with localhost in my sqltool.rc file.
My START command:
java -cp ./hsqldb.jar org.hsqldb.server.Server --database.0 file:mydb --dbname.0 xdb
My sqltool.rc:
urlid xdb
url jdbc:hsqldb:hsql://localhost/xdb;shutdown=true
username SA
password
My STOP command:
java -jar sqltool.jar --sql "SHUTDOWN;" xdb
You can use SqlTool, which is a command-line utility supplied as a jar with HSQLDB. There is an example for Unix, but you can use a similar command on other operating systems.
http://hsqldb.org/doc/guide/unix-chapt.html#uxc_shutdown
Also see the Utilities Guide for more information:
http://www.hsqldb.org/doc/2.0/util-guide/index.html
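If you don't want to keep a sqltool.rc around just for the shutdown, newer SqlTool builds also accept an inline connection spec; as a sketch (check --help on your version for exact flag support):

java -jar sqltool.jar --inlineRc=url=jdbc:hsqldb:hsql://localhost/xdb,user=SA --sql "SHUTDOWN;"

SqlTool will prompt for the password; for the default SA account just press Enter.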