mitmproxy script seems not to be running? - mitmproxy

I am trying to run a simple mitmproxy script by issuing ./mitmproxy --mode transparent -s pyscript.py. The proxy works fine and there is no error in the mitmproxy console, but it seems the script never ran: the log.txt file stays empty even though the proxy successfully proxied client requests:
import mitmproxy.http


class Events:
    def response(self, f: mitmproxy.http.HTTPFlow):
        try:
            with open("/home/me/mitmproxy/log.txt", "a+") as log:
                log.write("rrr")
        except:
            with open("/home/me/mitmproxy/log.txt", "a+") as log:
                log.write("sss")

    def load(self, entry: mitmproxy.addonmanager.Loader):
        with open("/home/me/mitmproxy/log.txt", "a+") as log:
            log.write("xxx")

You have created an add-on class, but you forgot to create an instance of the class and register it with mitmproxy.
To do so, add the following entry at the end of your script:
addons = [
    Events()
]
See also sample Events script for mitmproxy: https://docs.mitmproxy.org/stable/addons-events/
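For reference, the complete script with the add-on registered would look roughly like this (same hooks and log path as in your script; the extra addonmanager import just makes the Loader annotation resolve even if mitmproxy has not already imported that submodule):

import mitmproxy.addonmanager
import mitmproxy.http


class Events:
    def load(self, entry: mitmproxy.addonmanager.Loader):
        # Called once when the add-on is loaded.
        with open("/home/me/mitmproxy/log.txt", "a+") as log:
            log.write("xxx")

    def response(self, f: mitmproxy.http.HTTPFlow):
        # Called for every response that passes through the proxy.
        with open("/home/me/mitmproxy/log.txt", "a+") as log:
            log.write("rrr")


# Without this list, mitmproxy never instantiates Events,
# so none of the hooks above ever run.
addons = [
    Events()
]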

Related

Program stalls on "DEBUG Starting new HTTPS connection (1): login.microsoftonline.com:443" only when executed via cron

I'm using the python-O365 library to access my O365 mailbox. The project requires me to run the program in a Docker container. If I start the program manually (as root), everything works fine, but if I start it via cron, it stalls on DEBUG Starting new HTTPS connection (1): login.microsoftonline.com:443, which I found out after enabling logging.
The minimal code example that reproduces the error (with log):
import O365
from utils.credentials import get_credentials
import logging  # We want to get additional information

logging.basicConfig(
    filename='./easy_log_file.log',
    filemode='a',
    format='%(levelname)s %(message)s',  # %(asctime)s %(pathname)s %(lineno)d
    level=logging.DEBUG
)

filename = "o365_token.txt"
token_backend = O365.FileSystemTokenBackend(token_path=filename)
account = O365.Account(get_credentials(), token_backend=token_backend)
inbox = account.mailbox().inbox_folder()
messages = inbox.get_messages()
for message in messages:
    logging.info(message)
logging.info("finished")
To start it via cron, I used the following command:
echo "15 21 * * * bash /workspace/daemon_start.sh >> /workspace/cronlogs/logs_daemon_mail.log" | crontab
If I start the program manually, the log continues like this:
DEBUG Starting new HTTPS connection (1): graph.microsoft.com:443
DEBUG https://graph.microsoft.com:443 "GET /v1.0/me/mailFolders/Inbox/messages?%24top=100&%24filter=isRead+eq+false HTTP/1.1" 200 None
DEBUG Received response (200) from URL https://graph.microsoft.com/v1.0/me/mailFolders/Inbox/messages?%24top=100&%24filter=isRead+eq+false
If the program is started via cron, sometimes the log continues like this:
DEBUG Incremented Retry for (url='/common/oauth2/v2.0/token'): Retry(total=2, connect=3, read=3, redirect=None, status=None)
WARNING Retrying (Retry(total=2, connect=3, read=3, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1131)'))': /common/oauth2/v2.0/token
In order to resolve the issue, I added my proxy by using account = Account(credentials, token_backend=token_backend, proxy_server="proxy.my_proxy.com"). It is strange that I would have to add it, since the container is already configured to use this proxy. With this setting I encountered the same issue, except that the log, when started via cron, now always continued, and much faster.
Since I think cron simply starts the program and does not meddle with the connections, it does not make sense to me that I get different outcomes when starting it manually versus via cron.
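To see whether the two environments really differ, something like this could be added at the top of the script to log the proxy-related environment variables that cron and the interactive shell each pass to the process (a small diagnostic sketch, not a fix; cron usually starts jobs with a much smaller environment than a login shell):

import logging
import os

# Compare these lines between the manual run and the cron run.
for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY", "no_proxy", "NO_PROXY"):
    logging.info("%s=%s", var, os.environ.get(var))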

volttron.platform.vip.agent.core ERROR: Possible conflicting identity

I have been working on building my agent development skills in Volttron. I am completely new to the platform and am trying to understand how to create basic agents that publish and subscribe to the Volttron bus. I'm not alone in this venture and get help from a few other people with experience, but even they are stumped. We are using the same agent files, shared through GitHub, but the agent works on their computers and not on mine.
The publishing agent reads from a CSV file that is in the same directory as the agent and is supposed to publish information from that file. I have been careful to make the file path in my source code match my setup. I receive the following messages when I run my publishing agent from Eclipse "Mars" on Linux Mint 18.1 Serena:
2017-02-02 14:27:22,290 volttron.platform.agent.utils DEBUG: missing file /home/edward/.volttron/keystores/f9d18589-d62b-42b7-bac8-3498a0c37220/keystore.json
2017-02-02 14:27:22,290 volttron.platform.agent.utils INFO: creating file /home/edward/.volttron/keystores/f9d18589-d62b-42b7-bac8-3498a0c37220/keystore.json
2017-02-02 14:27:22,292 volttron.platform.vip.agent.core DEBUG: address: ipc://@/home/edward/.volttron/run/vip.socket
2017-02-02 14:27:22,292 volttron.platform.vip.agent.core DEBUG: identity: None
2017-02-02 14:27:22,292 volttron.platform.vip.agent.core DEBUG: agent_uuid: None
2017-02-02 14:27:22,292 volttron.platform.vip.agent.core DEBUG: severkey: None
2017-02-02 14:27:32,324 volttron.platform.vip.agent.core ERROR: No response to hello message after 10 seconds.
2017-02-02 14:27:32,324 volttron.platform.vip.agent.core ERROR: A common reason for this is a conflicting VIP IDENTITY.
2017-02-02 14:27:32,324 volttron.platform.vip.agent.core ERROR: Shutting down agent.
2017-02-02 14:27:32,324 volttron.platform.vip.agent.core ERROR: Possible conflicting identity is: f9d18589-d62b-42b7-bac8-3498a0c37220
I have done the following:
Created the missing file "/home/edward/.volttron/keystores/f9d18589-d62b-42b7-bac8-3498a0c37220/keystore.json". The only thing that happens when I run the agent again is that it gives me the same DEBUG message, but with a different file name.
I looked into the "volttron.platform.vip.agent.core" file and have no idea what to do in there. I don't want to create more problems for myself.
I have been using the Volttron documentation to try to troubleshoot, but I always get the same message whenever I try to run any agent. I have had success testing the platform and running "make-listener" through the terminal, but that's all.
I have been searching the web for the last couple of days and have seen similar issues, but when attempting to follow the advice posted to remedy the situation, I have had no luck. Error: volttron.platform.web INFO: Web server not started
Reinstalled Volttron, Mint, and Eclipse on my VM a few times to overcome any compatibility issues...
The source code for the agent is as follows:
#testcodeisforpublishingandprinting
import logging
import sys
#import json
from volttron.platform.vip.agent import Agent, Core, PubSub, compat
#from volttron.platform.vip.agent import *
#from volttron.platform.vip.agent import compat
from volttron.platform.agent import utils
from volttron.platform.messaging import headers as headers_mod
from datetime import datetime
#import numpy as NP
#from numpy import linalg as LA
import csv

outdata = open("/home/edward/volttron/testagent/Agent/PredictionfileP.csv", "rb")
Pdata = csv.DictReader(outdata)
Price = []
for row in Pdata:
    Price.append(float(row['Price']) * 0.01)

#from volttron.platform.agent import BaseAgent, PublishMixin, periodic, matching, utils
#from volttron.platform.agent import BaseAgent, PublishMixin, periodic
utils.setup_logging()
_log = logging.getLogger(__name__)


class testagent1(Agent):
    def __init__(self, config_path, **kwargs):
        self.config = utils.load_config(config_path)
        super(testagent1, self).__init__(**kwargs)
        self.step = 0
        #print('TestAgent example agent start-up function')

    #Core.receiver('onsetup')
    def onsetup(self, sender, **kwargs):
        self._agent_id = self.config['agentid']

    #Core.receiver('onstart')
    def onstart(self, sender, **kwargs):
        pass

    #Core.receiver('onstop')
    def onstop(self, sender, **kwargs):
        pass

    #Core.receiver('onfinish')
    def onfinish(self, sender, **kwargs):
        pass

    #Core.periodic(5)
    def simulate(self):
        self.step = self.step + 1  # timestep increase
        print('Simulationrunning')
        now = datetime.utcnow().isoformat(' ')  # time now
        headers = {
            'AgentID': self._agent_id,
            headers_mod.CONTENT_TYPE: headers_mod.CONTENT_TYPE.PLAIN_TEXT,
            headers_mod.DATE: now,
        }
        print(self.step)
        self.vip.pubsub.publish('pubsub', 'testcase1/Step', headers, self.step)
        print('Simulationupdatingloopingindex')


def main(argv=sys.argv):
    '''Main method called by the eggsecutable.'''
    try:
        utils.vip_main(testagent1)
    except Exception as e:
        _log.exception('unhandled exception')


if __name__ == '__main__':
    # Entry point for script
    sys.exit(main())
I installed my version of Volttron using the 3.5RC1 manual published Jan. 2017.
I am assuming that you are running this from Eclipse and not through the installation process. Installing the agent would specify an identity that remains for the life of the agent.
The remaining answer is specific to running within the Eclipse environment.
def main(argv=sys.argv):
    '''Main method called by the eggsecutable.'''
    try:
        # This is where the change is.
        utils.vip_main(testagent1, identity='Thisidentity')
    except Exception as e:
        _log.exception('unhandled exception')
You will have to authorize the agent to connect to the message bus by adding the agent's public key through the auth mechanism, or you can add the wildcard /.*/ as the credentials of an entry through volttron-ctl auth add.
Thank you for asking this question. We are updating the documentation to highlight this.
You will need to do the following at the command line:
volttron-ctl auth add
domain []:
address []:
user_id []:
capabilities (delimit multiple entries with comma) []:
roles (delimit multiple entries with comma) []:
groups (delimit multiple entries with comma) []:
mechanism [CURVE]:
credentials []: /.*/
comments []:
enabled [True]:
added entry domain=None, address=None, mechanism='CURVE', credentials=u'/.*/', user_id='ff6fea8e-53bd-4506-8237-fbb718aca70d'

wsadmin script timing out when executing against DMGR via SOAP

I'm attempting to start and stop an application on a single JVM via wsadmin, since the Web UI for IBM BPM PS Adv. doesn't allow that kind of operation. So, I have the following script:
https://gist.github.com/predatorian3/b8661c949617727630152cbe04f78d7e
and when I run it against the DMGR from the Cell Host, I receive the following errors.
[wasadmin@server01 ~]$ cat /usr/local/bin/Run_wsadmin.sh
#!/bin/bash
#
#
#
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -lang jython -user serviceAccount -password password $*
[wasadmin@cessoapscrt00 ~]$ time Run_wsadmin.sh -f /opt/IBM/wsadmin/wsadmin_Restart_Application.py WPS00 CRT00WPS01 redirectResource_war
WASX7209I: Connected to process "dmgr" on node CRTDMGR using SOAP connector; The type of process is: DeploymentManager
WASX7303I: The following options are passed to the scripting environment and are available as arguments that are stored in the argv variable: "[WPS00, CRT00WPS01, redirectResource_war]"
WASX7017E: Exception received while running file "/opt/IBM/wsadmin/wsadmin_Restart_Application.py"; exception information: com.ibm.websphere.management.exception.ConnectorException
org.apache.soap.SOAPException: [SOAPException: faultCode=SOAP-ENV:Client; msg=Read timed out; targetException=java.net.SocketTimeoutException: Read timed out]
real 3m21.275s
user 0m17.411s
sys 0m0.796s
So, I'm not specifying the connection type and am using the default, which is SOAP. However, upon reading about the other connection types, none of them seems any better, though I attribute that to vagueness in the IBM documentation. Is there an option to increase the timeout period, or to turn it off, or is there a better connection type?
Also, when running this directly in the wsadmin console, it seems to hang while gathering the application manager string.
[wasadmin@server01 ~]$ Run_wsadmin.sh
WASX7209I: Connected to process "dmgr" on node CRTDMGR using SOAP connector; The type of process is: DeploymentManager WASX7031I: For help, enter: "print Help.help()"
wsadmin>appManager = AdminControl.queryNames('cell=CRTCELL,node=WPS00,type=ApplicatoinManager,process=CRT00WPS01,*')
WASX7015E: Exception running command: "appManager = AdminControl.queryNames('cell=CRTCELL,node=WPS00,type=ApplicationManager,process=CRT00WPS01,*')"; exception information:
com.ibm.websphere.management.exception.ConnectorException
org.apache.soap.SOAPException: [SOAPException: faultCode=SOAP-ENV:Client; msg=Read timed out; targetException=java.net.SocketTimeoutException: Read timed out]
wsadmin>
You can increase the timeout value in {profile}/properties/soap.client.props:
com.ibm.SOAP.requestTimeout=180
If you want to turn off the timeout entirely, set com.ibm.SOAP.requestTimeout=0.
Or, if you just want a longer timeout, change the value 180 to something else.
Also, about your query command: I noticed you have a typo in the MBean type. You had type=ApplicatoinManager; it should be type=ApplicationManager.
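For illustration, the corrected query plus a start/stop invocation might look like this in the wsadmin Jython console (cell, node, server, and application names taken from your command; adjust them for your environment):

# Corrected MBean type: ApplicationManager
appManager = AdminControl.queryNames(
    'cell=CRTCELL,node=WPS00,type=ApplicationManager,process=CRT00WPS01,*')

# Start or stop the application on that JVM
AdminControl.invoke(appManager, 'startApplication', 'redirectResource_war')
AdminControl.invoke(appManager, 'stopApplication', 'redirectResource_war')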
HERE YOU GO -- I had the same issue and wanted to override the timeout property temporarily. This worked like a champ. Make sure you follow the steps below exactly; I made some mistakes at first and the property was not picked up, but once I figured that out, it works.
Copy the soap.client.props file from {profile}/properties and give it a new name, such as mysoap.client.props.
Edit mysoap.client.props and update the value of com.ibm.SOAP.requestTimeout as required
Create a new Java properties file soap_override.props and enter the following line:
com.ibm.SOAP.ConfigURL=file:/mysoap.client.props
Pass soap_override.props into wsadmin using the -p option: wsadmin -p soap_override.props...
REFERENCE:
https://www.ibm.com/developerworks/community/blogs/timdp/entry/avoiding_wsadmin_request_timeouts_the_neat_way32?lang=en

Docker Akka-Http application endpoint not reachable

I have a very basic Akka HTTP app that is not much more than a hello-world setup: I have defined an endpoint and simply bound it to "localhost" and port 8080:
object Main extends App with Routes {
  private implicit val system = ActorSystem()
  protected implicit val executor: ExecutionContext = system.dispatcher
  protected implicit val materializer: ActorMaterializer = ActorMaterializer()
  protected val log: LoggingAdapter = Logging( system, getClass )

  log.info( "starting server" )
  Http().bindAndHandle( logRequestResult("log", Logging.InfoLevel)( allRoutes ), "localhost", 8080 )
  log.info( "server started, awaiting requests.." )
}
(allRoutes is mixed in via Routes, but is just a dummy endpoint that serialises a simple case class to a JSON response)
If I start it up using sbt, the endpoint works fine (http://localhost:8080/colour/red, for example).
I am now trying to package it into a Docker container to run it. I have read things like http://yeghishe.github.io/2015/06/24/running-akka-applications.html and have added the sbt-native-packager plugin (http://www.scala-sbt.org/sbt-native-packager/formats/docker.html#customize).
Now I run sbt docker:publishLocal
And I can see that the docker image has been created:
REPOSITORY TAG IMAGE ID CREATED SIZE
sample-rest-api 0.0.1 3c6ee44985b4 9 hours ago 714.4 MB
If I now start my image, mapping the 8080 port as follows:
docker run -p 8080:8080 sample-rest-api:0.0.1
I see the log output I normally see on startup, so it looks like it has started OK. However, if I then attempt to access the same URL as before, I now get the response:
"Problem loading the page: The connection was reset"
If I check docker ps I see that the image is running, and the ports mapped as expected:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
27848729a425 sample-rest-api:0.0.1 "bin/sample-rest-api" About a minute ago Up About a minute 0.0.0.0:8080->8080/tcp furious_heisenberg
I am running on Ubuntu 16.04 - anyone have any ideas?
Try changing "localhost" to "0.0.0.0" in the Http().bindAndHandle call. Inside the container, binding to localhost makes the server listen only on the container's loopback interface, so the port mapping from the host cannot reach it; binding to 0.0.0.0 makes it listen on all interfaces.

python-memcache memcached -- I installed it on CentOS in VirtualBox but get/set never seem to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (suposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (suposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is: what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the memcached shared area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obvious one when I do a ps.
I tried to get pylibmc installed on my VM but was unable to find a working installation, so for now I will see if I can get the above working first.
I discovered that if I run straight from the interactive Python console I get a bit more output when I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try, per the example, to use telnet to connect to the port, I get a connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure  += USERID
    disable         = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran with both cases (iptables started and stopped) but it had no effect, so I am out of ideas. What do I need to do to make the port reachable, if that is the problem?
Or is there a memcached service that needs to be running to open up the port?
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement; it's apparently the listen option. Do this for the files in steps 2 and 3 -- I'm not certain which file is actually used at runtime.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
I tried opening a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant to my case. Perhaps I would have to supply a ?key=blah or something.
5) http://127.0.0.1:11211
6) Now it should be ready to go. If you run the test shown below, it should work; at least it did for me. Doing help(memcache) displays a simple program -- just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)
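Once the daemon is actually listening on 127.0.0.1:11211, the same set/get calls from the m2.py test above should behave roughly like this (a minimal sketch, with debug=1 so any connection problems are still printed):

import memcache

# Connect to the local memcached daemon started via /etc/init.d/memcached start
mc = memcache.Client(['127.0.0.1:11211'], debug=1)

# With the daemon running, set() returns a truthy value and get() returns
# the stored string instead of None.
print 'set returned %s' % mc.set("some_key", "Some value")
print 'get returned %s' % mc.get("some_key")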