I'm just kinda curious whether there is a way to use the mocker driver alongside, or instead of, the docker one. How does fn even decide which one to use if there is more than one? The reason I ask is that, if it's possible, I might try to implement another real driver for another container engine.
So far I've managed to get mocker to show up as a driver, but I still haven't found out how to get fn to use it instead of docker.
There is an example of building fn with extensions here: https://github.com/fnproject/fn/blob/master/examples/extensions/main.go#L16 -- building with a custom driver currently requires that same process (i.e. there's no way to configure another driver at runtime from fn core's binary without extending it).
To build with an alternative driver such as mocker, use the agent.WithDockerDriver option to specify a driver when creating the agent. The options are documented at https://godoc.org/github.com/fnproject/fn/api/agent#AgentOption and a sample follows:
package main

import (
	"context"
	"github.com/fnproject/fn/api/agent"
	"github.com/fnproject/fn/api/agent/drivers/mock"
	"github.com/fnproject/fn/api/server"
)

func main() {
	mocker := mock.New()
	// configure logstore, mq (setup elided)
	da := agent.NewDirectCallDataAccess(logstore, mq)
	magent := agent.New(da, agent.WithDockerDriver(mocker))
	fns := server.New(server.WithAgent(magent) /* other options */)
	fns.Start(context.Background())
}
We need to tidy up the agent interfaces to make these easier to create (the data access stuff is convoluted), but it's not too bad. Most of this can be stolen from this file: https://github.com/fnproject/fn/blob/master/api/server/server.go -- we need to rename it to WithDriver as well :)
Assuming you're looking at using something like rkt or a more robust driver on the backend, it's possible to hook this up by implementing the driver interface. We tried it in the past, but we aren't maintaining it at present since it wasn't a viable option (performance issues, perhaps improved since). It would be cool to see if you manage to get rkt working; we'd gladly take a PR for it and figure out where to put it :)
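Wiring a real driver in would then be the same one-line swap as the mock sample above. A sketch, reusing the imports from that sample and assuming a hypothetical rktdriver package that implements the same driver interface the mock driver does:

func main() {
	// rktdriver is hypothetical; it would satisfy the same driver
	// interface that mock.New() returns.
	rkt := rktdriver.New()
	// configure logstore, mq as in the sample above
	da := agent.NewDirectCallDataAccess(logstore, mq)
	ragent := agent.New(da, agent.WithDockerDriver(rkt)) // to be renamed WithDriver
	fns := server.New(server.WithAgent(ragent))
	fns.Start(context.Background())
}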
We are in the process of migrating our testbench to UVM.
I am working on the first IP that will be verified using UVM.
I have to find out whether it is possible to reuse my uvm_sequences in the SoC, which remains in OVM in the meantime.
If it is possible, I would like to find an example of how it's done.
Thanks in advance.
You cannot mix OVM and UVM that way. You should be able to write your uvm_sequence in such a way that it works in both by simply changing your u's to o's; you would have to limit your sequence to functionality that exists in both.
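For example, a sequence restricted to the common subset might look like the sketch below (my_seq and my_item are placeholder names); porting it to OVM is then just a matter of swapping the uvm_ prefixes for ovm_:

// Portable sketch: only uses constructs that exist in both OVM and UVM.
class my_seq extends uvm_sequence #(my_item);
  `uvm_object_utils(my_seq)

  function new(string name = "my_seq");
    super.new(name);
  endfunction

  virtual task body();
    my_item req;
    // `uvm_do has a direct `ovm_do equivalent with the same semantics
    `uvm_do(req)
  endtask
endclass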
If you use the UVM RAL, there is a package that integrates that functionality back into OVM.
There is another package, ovm_container, that gives you the functionality of uvm_config_db.
I'm interested in using the qlot library from inside a Lisp image to manage multiple local instances of Quicklisp.
There doesn't seem to be any documentation on how to use it, except through a non-Lisp CLI interface, and the obvious
(qlot:with-local-quicklisp (#P"/a/path/here/") (qlot:install :skippy))
or
(qlot:with-local-quicklisp (#P"/a/path/here/") (qlot:quickload :skippy))
give me
Component "skippy" not found
[Condition of type ASDF/FIND-SYSTEM:MISSING-COMPONENT]
What I'm looking for is a way to install a particular library by name. Basically, exactly how one would use ql:quickload, but targeting a specific, local directory instead of ~/quicklisp. What am I doing wrong?
It looks like the intent is to modify dynamically scoped variables in a way that makes using ql:quickload directly possible.
So
(qlot:with-local-quicklisp (#P"/a/path/to/some/quicklisp/")
  (qlot/util:with-package-functions :ql (quickload)
    (quickload :skippy)))
will result in skippy being installed in the quicklisp instance at #P"/a/path/to/some/quicklisp/" instead of the default location.
This leaves me a bit perplexed as to what qlot:quickload is for; its describe output doesn't shed additional light.
I have a question related to the V4L-DVB drivers. Following the Building/Compiling the Latest V4L-DVB Source Code link, there are three ways to compile. I am curious about the last one (the More "Manually Intensive" Approach), which lets me choose the components I wish to build and install using "make menuconfig". Some of these config symbols (e.g. CONFIG_MEDIA_ATTACH) are used in preprocessor directives that define a function one way when the symbol is defined and another way when it is not (e.g. dvb_attach, dvb_detach) in the resulting modules (e.g. dvb_core.ko) that most of the DVB drivers will load. What happens if there are two drivers (*.ko modules) on the same host machine, one that needs dvb_core.ko built with CONFIG_MEDIA_ATTACH defined and another that needs it built with CONFIG_MEDIA_ATTACH undefined? Is there a clean way to handle this?
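For reference, the pattern I mean looks roughly like this simplified sketch (illustrative, not the exact kernel source): with CONFIG_MEDIA_ATTACH set, dvb_attach resolves the frontend function at runtime and takes a module reference; without it, the macro collapses to a direct call.

#ifdef CONFIG_MEDIA_ATTACH
/* dynamic attach: resolve the symbol at runtime, pinning its module */
#define dvb_attach(FUNCTION, ARGS...) ({ \
        void *__r = NULL; \
        typeof(&FUNCTION) __a = symbol_request(FUNCTION); \
        if (__a) \
                __r = (void *) __a(ARGS); \
        __r; \
})
#else
/* static attach: a plain direct call */
#define dvb_attach(FUNCTION, ARGS...) ({ \
        FUNCTION(ARGS); \
})
#endif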
What is also not clear to me: since the V4L compilation environment seems very customizable (via the .config file), if I develop a driver using V4L-DVB structures, there is a big chance that it will conflict with other drivers, since each driver has its own custom settings. Is my understanding correct?
Thanks!
Dave
Update: TL;DR there seems to be no built-in way to achieve this, so a custom task is an easy solution.
Capistrano provides facilities to share files and directories across all releases. This is convenient and even provides some safety for files that should not be easily changed (or must remain the same across releases), e.g. a database configuration file.
But when it comes to replacing or just updating one of these shared files, I end up doing it manually, directly on the target machine. I would like to improve on that, for instance by asking Capistrano to overwrite some or all shared files when deploying. A kind of --force flag with some granularity.
I am not aware of any such facility, and my search has so far come up empty. Any pointers?
Thinking about it
One of the reasons why this facility does not exist (other than my not having found it!) is that it may be harder than it looks. For example, let's assume we have a shared database configuration file, and we exclude it from version control for security reasons (common practice). The current release relies on version 1 of the DB configuration. The next release requires version 2. If the deployment goes well, everything's good. It gets harder when rolling back after some error with the new release (e.g. a regression), as version 1 must then be available again.
Such automation would be cool and convenient, but dangerous as well. Yet I have practical use cases at hand.
I created a template method to do this. For example, I could have a task like this:
task :create_database_yml do
on roles(:app, :db) do
within(shared_path) do
template "local/path/to/database.yml.erb",
"config/database.yml",
:mode => "600"
end
end
end
And then I have a database.yml.erb template that uses things like fetch(:database_password) to fill in appropriate values. You can use Capistrano's ask method to prompt for these values so they are never committed.
The implementation of template can be very simple: you just need to read the file, pass it through ERB, and then use Capistrano's upload! to place the results on the server.
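A minimal sketch of such a helper, assuming it is called inside an on block so that SSHKit's upload! and execute are available (the helper name and option handling are illustrative, not capistrano-mb's actual code):

require "erb"
require "stringio"

def template(from, to, options = {})
  # Render the local ERB file; calls like fetch(:database_password)
  # inside the template resolve against the Capistrano configuration.
  rendered = ERB.new(File.read(from)).result(binding)
  # upload! and execute are SSHKit methods, available inside `on` blocks.
  upload! StringIO.new(rendered), to
  execute :chmod, options.fetch(:mode, "600"), to
end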
My version is a little more complicated than yours probably needs to be, but in case you are curious:
https://github.com/mattbrictson/capistrano-mb/blob/7600440ecd3331945d03e059368b75849857f1fb/lib/capistrano/mb/dsl.rb#L104
One approach is to use a system configuration tool like Chef or Puppet to deploy the configuration files distinctly from Capistrano.
Another approach is to create a custom task to do this: https://coderwall.com/p/wgs6gw/copy-local-files-to-remote-server-using-capistrano-3
I personally don't change on-server configs often enough, or on enough servers, to have tried to automate it yet. Crafting an scp command that copies the desired config file to all of the required servers has sufficed in the past.
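For what it's worth, such a one-off command can be as simple as the following (the host and paths here are made up):

scp config/database.yml deploy@app1.example.com:/var/www/myapp/shared/config/database.yml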
I need to disable the Nagle algorithm in Python 2.6.
I found out that patching HTTPConnection in httplib.py this way
def connect(self):
    """Connect to the host and port specified in __init__."""
    self.sock = socket.create_connection((self.host, self.port),
                                         self.timeout)
    self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)  # added line
does the trick.
Obviously, I would like to avoid patching a system library if possible. So, the question is: what is the right way to do this sort of thing? (I'm pretty new to Python and could easily be missing an obvious solution here.)
Please note that if you are using the socket library directly, the following is sufficient:
self.socket.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
I append this information to the accepted answer because it satisfies the information need that brought me here.
It's not possible to change the socket options that httplib specifies, and it's not possible to pass in your own socket object either. In my opinion this sort of lack of flexibility is the biggest weakness of most of the Python HTTP libraries. For example, prior to Python 2.6 it wasn't even possible to specify a timeout for the connection (except by using socket.setdefaulttimeout() globally, which wasn't very clean).
If you don't mind external dependencies, it looks like httplib2 already has TCP_NODELAY specified.
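Using it is straightforward; a minimal example (the URL is a placeholder):

import httplib2

# httplib2 already sets TCP_NODELAY on its connections, so a plain
# request suffices; resp holds the headers, content the body.
h = httplib2.Http()
resp, content = h.request("http://example.com/")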
You could monkey-patch the library. Because Python is a dynamic language and more or less everything is resolved by namespace lookup at runtime, you can simply replace the appropriate method on the relevant class:
import socket
import httplib

def patch_httplib():
    orig_connect = httplib.HTTPConnection.connect
    def my_connect(self):
        orig_connect(self)
        # same option as in the question's snippet
        self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, True)
    httplib.HTTPConnection.connect = my_connect
However, this is extremely error-prone because it makes your code quite specific to a particular Python version, and these library functions and classes do change. For example, in 2.7 there's a _tunnel() method called which uses the socket, so you'd want to hook into the middle of the connect() method -- monkey-patching makes that extremely tricky.
In short, I don't think there's an easy answer, I'm afraid.