I'm writing a Framer.js function to simulate the 'splash' effect when you tap a button or a layer, as per Google Material Design.
It looks something like this:
tapSplash = (tapX, tapY) ->
    tapSplashLayer = new Layer
        backgroundColor: "#ffffff"
        opacity: 0.2
        width: 500, height: 1500
        borderRadius: 1500
        midX: tapX
        midY: tapY
After this, I have some code to run the animation.
My question is, how do I get the tapX and tapY coordinates? It is not good enough to use the midpoint of the layer that has been clicked/tapped; I want the animation to originate from the exact point the user tapped on.
Check out your own answer to the question.
I've since forked it and made changes so that taps on a computer and taps on a mobile device are recognized separately:
https://github.com/carignanboy1/Material-Design-Interactions/tree/improved-touch
touchEvent = Events.touchEvent(event)

if Utils.isPhone() || Utils.isTablet()
    # clientX/clientY are relative to the viewport, so subtract the layer position
    tX = touchEvent.clientX - layer.x
    tY = touchEvent.clientY - layer.y
else
    # offsetX/offsetY are already relative to the element that was clicked
    tX = touchEvent.offsetX
    tY = touchEvent.offsetY
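For reference, here is a minimal sketch of how that snippet might be wired into a touch handler that calls tapSplash; the example layer named button and the use of Events.TouchStart are assumptions on my part, not part of the fork:

# Hypothetical wiring; "button" is just an example layer
button = new Layer
    width: 200, height: 80

button.on Events.TouchStart, (event) ->
    touchEvent = Events.touchEvent(event)
    if Utils.isPhone() || Utils.isTablet()
        tX = touchEvent.clientX - button.x
        tY = touchEvent.clientY - button.y
    else
        tX = touchEvent.offsetX
        tY = touchEvent.offsetY
    tapSplash(tX, tY)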
I have a Nucleo F401RE and want to use my display with it. I have these two I2C functions:
void sh1106_sendCMD(uint8_t cmd)
{
    uint8_t reg_cmd[2];
    reg_cmd[0] = 0x00; // control byte: command
    reg_cmd[1] = cmd;
    HAL_I2C_Master_Transmit(&hi2c1, OLED_I2C_ADDRESS, reg_cmd, 2, HAL_MAX_DELAY);
}

void sh1106_sendData(uint8_t data)
{
    uint8_t reg_data[2];
    reg_data[0] = 0x40; // control byte: data
    reg_data[1] = data;
    HAL_I2C_Master_Transmit(&hi2c1, OLED_I2C_ADDRESS, reg_data, 2, HAL_MAX_DELAY);
}
Then I have an init function to initialize the display:
void sh1106_init(void)
{
    // Initialize the display
    sh1106_sendCMD(0xAE); // Set display off
    sh1106_sendCMD(0xD5); // Set display clock divide ratio/oscillator frequency
    sh1106_sendCMD(0x80); // Set display clock divide ratio/oscillator frequency
    sh1106_sendCMD(0xA8); // Set multiplex ratio
    sh1106_sendCMD(0x3F); // 1/64 duty
    sh1106_sendCMD(0xD3); // Set display offset
    sh1106_sendCMD(0x00); // No offset
    sh1106_sendCMD(0x40); // Set start line address
    sh1106_sendCMD(0xA1); // Set segment re-map
    sh1106_sendCMD(0xC8); // Set COM output scan direction
    sh1106_sendCMD(0xDA); // Set COM pins hardware configuration
    sh1106_sendCMD(0x12);
    sh1106_sendCMD(0x81); // Set contrast control register
    sh1106_sendCMD(0xFF); // Maximum contrast
    sh1106_sendCMD(0xA4); // Set display to RAM
    sh1106_sendCMD(0xA6); // Set normal display
    sh1106_sendCMD(0xD9); // Set pre-charge period
    sh1106_sendCMD(0xF1);
    sh1106_sendCMD(0xDB); // Set VCOMH deselect level
    sh1106_sendCMD(0x40);
    sh1106_sendCMD(0x8D); // Set charge pump enable/disable
    sh1106_sendCMD(0x14); // Enable charge pump
    sh1106_sendCMD(0xAF); // Set display on
}
When I then try to set the pixel (x=0, y=0), nothing happens, but when I set the pixel (x=2, y=0) the pixel (x=0, y=0) turns on. Somehow I have a horizontal offset of -2.
I set the pixel like this:
sh1106_sendCMD(0xB0 | 0); // Set the page address
sh1106_sendCMD(0x02); // Set the lower column address
sh1106_sendCMD(0x10); // Set the higher column address
sh1106_sendData(0x01); // Write one data byte: bit 0 lights the top pixel of that column in the page
You probably have a 128 by 64 pixel display. The SH1106, however, supports up to 132 by 64 pixels, so there are 4 unused pixel columns.
The easiest solution is to always add 2 to all x-coordinates.
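As an illustration, a small helper along these lines could hide that offset; the function name sh1106_setCursor is made up here and is built only from the sendCMD calls already shown above:

// Hypothetical helper: position the RAM pointer, compensating for the
// 2-column gap between the 132-column SH1106 RAM and a 128-column panel
void sh1106_setCursor(uint8_t x, uint8_t page)
{
    uint8_t col = x + 2;                  // shift into the visible window
    sh1106_sendCMD(0xB0 | (page & 0x07)); // set the page address (0..7)
    sh1106_sendCMD(0x00 | (col & 0x0F));  // set the lower column address nibble
    sh1106_sendCMD(0x10 | (col >> 4));    // set the higher column address nibble
}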
If you feel more adventurous, you can configure the SH1106 accordingly. Given the limited information about your display, I can only guess. It could be:
sh1106_sendCMD(0x42);
replacing:
sh1106_sendCMD(0x40);
See the SH1106 datasheet for more information.
I have a script that for some reason does not work.
Let me explain the situation: there is a mini game with green rings that change their position (not randomly) every time the white ring is stopped on them with the space bar.
I wrote a small script to find the green color, but for some reason it does not work.
F5::
Loop {
    PixelGetColor, color1, 957, 672, RGB
    PixelGetColor, color2, 957, 672, RGB
    if (color1 != 0x10A04B) {
        Sleep, 80
    } else {
        SoundBeep
        Sleep, 80
    }
    if (color2 != 0xFDFFFE) {
        Sleep, 80
    } else {
        Send, {space}
    }
}
return
To help you fully understand the mini game, here is a link to a video: https://youtu.be/b4y1aiQNea4
Please help me understand the implementation. Thank you!
Use a timer instead of the loop.
Try using the Alt or Slow mode of PixelGetColor.
For finding the green ring coordinates, try using PixelSearch; it lets you specify a variation of the color, which is necessary because the ring is partially transparent. A sketch is shown after the code below.
colorchange(x, y) {
    PixelGetColor, color, x, y
    if (color = 0xFFFFFF) {  ; pure white
        Return 1
    } Else {
        Return 0
    }
}

if colorchange(X, Y) {
    Send, {Space}
}
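As a rough illustration of the PixelSearch suggestion above (the hotkey, the search region and the variation value 10 are placeholders; 0x10A04B is the green from your script):

F6::
; Sketch: scan a region of the screen for the green ring,
; allowing a variation of up to 10 per color channel
PixelSearch, ringX, ringY, 800, 500, 1100, 800, 0x10A04B, 10, Fast RGB
if (ErrorLevel = 0) {
    SoundBeep
    Send, {Space}
}
return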
I've trained darkflow on my data set and have good results! I can feed it a pre-recorded image or video and it draws the bounding boxes around the right things, win!
Now I'd like to run it live, as has been done with camera feeds, except I'd like my feed to come from the screen, not the camera. I have a specific window, which is launched from a specific process, or I can just take a section of the screen (from coords); either is fine for my application.
Currently I use PIL's ImageGrab and then feed the images into darkflow, but this feels quite slow (maybe a few frames per second), nothing like the 30-ish fps you can get with video files!
I get more than 25 fps with Python MSS on my slow laptop under Ubuntu.
Here is an example:
from mss import mss
from PIL import Image
import time

def capture_screenshot():
    with mss() as sct:
        monitor = sct.monitors[1]
        sct_img = sct.grab(monitor)
        # Convert to PIL/Pillow Image
        return Image.frombytes('RGB', sct_img.size, sct_img.bgra, 'raw', 'BGRX')

N = 100
t = time.time()
for _ in range(N):
    capture_screenshot()
print("Frame rate = %.2f fps" % (N / (time.time() - t)))
Output:
Frame rate = 27.55 fps
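Since the question mentions grabbing just a section of the screen from coordinates, note that mss also accepts an explicit region instead of a whole monitor; a small sketch (the coordinates below are placeholders):

from mss import mss
from PIL import Image

# Placeholder region: top-left corner at (100, 100), 640 x 480 pixels
region = {'top': 100, 'left': 100, 'width': 640, 'height': 480}

with mss() as sct:
    sct_img = sct.grab(region)
    img = Image.frombytes('RGB', sct_img.size, sct_img.bgra, 'raw', 'BGRX')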
I got over 40 fps with this script (on an i5-7500 3.4 GHz, GTX 1060, 48 GB RAM).
There are a lot of APIs that can be used to capture the screen. Among them, mss runs much faster and is not difficult to use. Here is an implementation of mss with darkflow (YOLOv2), in which 'mon' defines the area of the screen you want to run prediction on.
options is passed to darkflow and specifies which config file and checkpoint to use, the detection threshold, and how much of the GPU this process may occupy. Before running this script, you have to have at least one trained model (or TensorFlow checkpoint). Here, load is the checkpoint number.
If you think the network detects too many bounding boxes, I recommend lowering the threshold.
import numpy as np
import cv2
import glob
from moviepy.editor import VideoFileClip
from mss import mss
from PIL import Image
from darkflow.net.build import TFNet
import time
options = {
    'model': 'cfg/tiny-yolo-voc-1c.cfg',
    'load': 5500,
    'threshold': 0.1,
    'gpu': 0.7
}
tfnet = TFNet( options )
color = (0, 255, 0) # bounding box color.
# This defines the area on the screen.
mon = {'top' : 10, 'left' : 10, 'width' : 1000, 'height' : 800}
sct = mss()
previous_time = 0
while True:
    # Grab the screen region and convert it to a BGR numpy array for darkflow
    sct_img = sct.grab(mon)
    frame = np.array(sct_img)
    # frame = frame[::2, ::2, :]  # can be used to downscale the input
    frame = cv2.cvtColor(frame, cv2.COLOR_BGRA2BGR)
    results = tfnet.return_predict(frame)

    for result in results:
        tl = (result['topleft']['x'], result['topleft']['y'])
        br = (result['bottomright']['x'], result['bottomright']['y'])
        label = result['label']
        confidence = result['confidence']
        text = '{} : {:.0f}%'.format(label, confidence * 100)
        frame = cv2.rectangle(frame, tl, br, color, 5)
        frame = cv2.putText(frame, text, tl, cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 0), 2)

    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xff == ord('q'):
        cv2.destroyAllWindows()
        break

    txt1 = 'fps: %.1f' % (1. / (time.time() - previous_time))
    previous_time = time.time()
    print(txt1)
My Problem:
I want to use functions of OpenCV like the MIL tracker or the MedianFlow tracker in MATLAB (these functions are not in mexopencv), but I don't know how to do this, and the documentation of OpenCV/mexopencv doesn't help me. This doesn't help either: 'how do OpenCV shared libraries in matlab?', because the link in that answer is dead.
So is there a way to use these functions in MATLAB? And if so, how?
Why? As part of my bachelor thesis I have to compare different already-implemented ways to track people.
If you would like to use these functions specifically in MATLAB, you could always write your own MEX file in C/C++ and pass the data back and forth between the two calls; however, this requires some basic C++ knowledge and an understanding of how MEX files are created.
Personally, I would definitely recommend trying this with Python and the OpenCV Python interface, since it is so widely used and better supported than making these calls from MATLAB (plus it's always a useful skill to be able to switch between Python and MATLAB as and when needed).
There is a full example with the MIL tracker and the MedianFlow tracker (and others) here (which demonstrates using them in C++ and Python!).
Python example:
import cv2
import sys
(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')
if __name__ == '__main__':

    # Set up tracker.
    # Instead of MIL, you can also use
    tracker_types = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN']
    tracker_type = tracker_types[2]

    if int(minor_ver) < 3:
        tracker = cv2.Tracker_create(tracker_type)
    else:
        if tracker_type == 'BOOSTING':
            tracker = cv2.TrackerBoosting_create()
        if tracker_type == 'MIL':
            tracker = cv2.TrackerMIL_create()
        if tracker_type == 'KCF':
            tracker = cv2.TrackerKCF_create()
        if tracker_type == 'TLD':
            tracker = cv2.TrackerTLD_create()
        if tracker_type == 'MEDIANFLOW':
            tracker = cv2.TrackerMedianFlow_create()
        if tracker_type == 'GOTURN':
            tracker = cv2.TrackerGOTURN_create()

    # Read video
    video = cv2.VideoCapture("videos/chaplin.mp4")

    # Exit if video not opened.
    if not video.isOpened():
        print("Could not open video")
        sys.exit()

    # Read first frame.
    ok, frame = video.read()
    if not ok:
        print("Cannot read video file")
        sys.exit()

    # Define an initial bounding box
    bbox = (287, 23, 86, 320)

    # Uncomment the line below to select a different bounding box
    # bbox = cv2.selectROI(frame, False)

    # Initialize tracker with first frame and bounding box
    ok = tracker.init(frame, bbox)

    while True:
        # Read a new frame
        ok, frame = video.read()
        if not ok:
            break

        # Start timer
        timer = cv2.getTickCount()

        # Update tracker
        ok, bbox = tracker.update(frame)

        # Calculate Frames per second (FPS)
        fps = cv2.getTickFrequency() / (cv2.getTickCount() - timer)

        # Draw bounding box
        if ok:
            # Tracking success
            p1 = (int(bbox[0]), int(bbox[1]))
            p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
            cv2.rectangle(frame, p1, p2, (255, 0, 0), 2, 1)
        else:
            # Tracking failure
            cv2.putText(frame, "Tracking failure detected", (100, 80), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 0, 255), 2)

        # Display tracker type on frame
        cv2.putText(frame, tracker_type + " Tracker", (100, 20), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)

        # Display FPS on frame
        cv2.putText(frame, "FPS : " + str(int(fps)), (100, 50), cv2.FONT_HERSHEY_SIMPLEX, 0.75, (50, 170, 50), 2)

        # Display result
        cv2.imshow("Tracking", frame)

        # Exit if ESC pressed
        k = cv2.waitKey(1) & 0xff
        if k == 27:
            break
I would definitely try it using Python (if this is an option). Otherwise, if MATLAB is a must, then probably try implementing the C++ example code shown in the link above as a MEX file, linking OpenCV during compilation, e.g.
mex trackerMexOpenCV.cpp 'true filepath location to openCV lib'
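If the MEX route is taken, calling it from MATLAB could look roughly like this; the function name trackerMexOpenCV, its 'init'/'update' command interface and the file names are purely hypothetical, shown only to illustrate passing data back and forth between the two calls:

% Hypothetical usage of a compiled tracker MEX file from MATLAB
frame = imread('frame0001.png');        % first frame of the sequence
bbox  = [287 23 86 320];                % initial box as [x y width height]
trackerMexOpenCV('init', frame, bbox);  % create and initialize the tracker

nextFrame = imread('frame0002.png');
newBbox = trackerMexOpenCV('update', nextFrame);  % returns the updated box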
I hope this helps!
I wrote a script that automatically detects a play button on the screen and clicks it.
Here it is:
SetTimer Go, 1000
CoordMode Pixel, Screen
CoordMode Mouse, Screen

^!r:: Reload
F4:: T4 := !T4
F5:: play()

Go:
If (!T4) {
    return
}
play()
return

play() {
    FoundX := 0
    FoundY := 0
    ImageSearch, FoundX, FoundY, 0, 0, A_ScreenWidth, A_ScreenHeight, play2.png
    If (ErrorLevel = 2) {
        MsgBox Could not conduct the search.
    }
    Else {
        If (ErrorLevel = 1) {
            ; image not found this time
            return
        }
        Else {
            ; image found: click 40 px inside the match
            x := FoundX + 40
            y := FoundY + 40
            MouseClick, left, x, y
        }
    }
}
While it works fine in a regular window, in fullscreen (a borderless fullscreen window) it behaves weirdly.
For example, sometimes when it sees the button it clicks, but then it keeps clicking even though the button is not on the screen anymore. What's more, even if I reload the script, it still keeps clicking in that spot. After pressing play, a fast-forward button appears which looks fairly similar. Does ImageSearch have some tolerance settings?
The other sorcery is that when I change focus to another window (which is on top, but the play button is still visible), it clicks, which changes the focus back, and then it will not click anymore even after the button is back. However, if I use Alt+Tab to go back to that other window, it triggers.
Can anyone explain to me what is going on here?