Nested For Loop using trinket.io - nested-for-loop

I started doing this exercise on trinket.io about drawing a snowflake, and I don't really understand the order of execution or the steps in the loops (especially where the inner loop finishes). I would very much appreciate the help. Thank you!
#!/bin/python3
import turtle
# imports the turtle graphics module, which lets you draw
import random
# imports the random module, which provides randomness
orange = turtle.Turtle()
#naming my turtle by associating it with a variable
turtle.Screen().bgcolor("black")
#sets the screen colour to black
colours = ["cyan","purple","white","blue"]
orange.color(random.choice(colours))
#using the random function the program will choose from the list of colours
orange.penup() # moves pen without writing
orange.forward(90) # moves 90 units forward
orange.left(45) # turns 45 degrees anticlockwise (to the left)
orange.pendown() #starts drawing
def branch():
    for i in range(3):
        for i in range(3):
            orange.forward(30)   # draw one small spike
            orange.backward(30)  # come back to the start of the spike
            orange.right(45)     # turn to aim at the next spike
        # the inner loop finishes here, after drawing three spikes
        orange.left(90)
        orange.backward(30)
        orange.left(45)
    orange.right(90)
    orange.forward(90)
for i in range(8):   # draw eight branches, turning 45 degrees between them
    branch()
    orange.left(45)

Related

How do I make a Maze Generator on Scratch?

I am currently in High School, and I am in an APCSP (AP Computer Science Principles) class, which in my case is learning in Scratch programming. I am confused and have practically no idea what I'm doing. Scratch is very confusing and I feel like it's pointless to learn.
My question is this: Can anyone help me on how to make a Maze Generator on Scratch, as this is my project and it's giving me struggles.
Thank you.
It's actually possible to build this with Scratch, but it depends on what you are looking for. I assume you want to generate a simple maze like in old-fashioned 8-bit games such as Boulder Dash.
First decide on the size of your maze: for example 5 x 5 blocks.
If you want to create a maze, imagine drawing it on a grid on paper. Blocks are either "empty" or filled in. Our maze can be represented by numbers. The empty blocks are represented by a 0 and the filled blocks with a 1.
You could visualize that matrix like this if all blocks are empty:
0,0,0,0,0,
0,0,0,0,0,
0,0,0,0,0,
0,0,0,0,0,
0,0,0,0,0
Adding a border wall while keeping the inside empty would look like:
1,1,1,1,1,
1,0,0,0,1,
1,0,0,0,1,
1,0,0,0,1,
1,1,1,1,1
Using a "list" variable to store this information would fit best within the possibilities of MIT Scratch.
In this case, you need to understand that each block in our maze is represented by a position in above matrix. You could draw numbers on a piece of paper in the shape and size of your grid / matrix as a reference to remember the position of each block if that makes it easier.
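Since Scratch blocks can't be pasted here, here is a minimal Python sketch of the same idea (illustrative only): the 5 x 5 grid is stored as one flat list, exactly as a Scratch "list" variable would hold it, and the item number of a block is worked out from its row and column.

ROWS, COLS = 5, 5

# Build the bordered 5 x 5 maze as one flat list: 1 = wall, 0 = empty.
maze = []
for row in range(ROWS):
    for col in range(COLS):
        on_border = row in (0, ROWS - 1) or col in (0, COLS - 1)
        maze.append(1 if on_border else 0)

def item_number(row, col):
    # Scratch lists are 1-indexed, so block (row, col) lives at item row * COLS + col + 1.
    return row * COLS + col + 1

print(maze[item_number(2, 2) - 1])  # 0: the centre block is empty
print(maze[item_number(0, 3) - 1])  # 1: a block on the top border is a wall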
We also need to look at how our maze will relate to the Stage size. The width and height in pixels of a default scratch project is 480x360.
A 5 x 5 maze is divided in blocks of 480 / 5 = 96 width and 360 / 5 = 72 height. In other words, a block needs to be 96x72 pixels, based on a full screen maze.
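As a small sketch of that arithmetic (assuming the default Stage, which runs from -240 to 240 in x and -180 to 180 in y with the origin in the centre), converting a block's row and column into Stage coordinates looks like this:

STAGE_W, STAGE_H = 480, 360
ROWS, COLS = 5, 5

BLOCK_W = STAGE_W // COLS   # 96 pixels wide
BLOCK_H = STAGE_H // ROWS   # 72 pixels tall

def block_position(row, col):
    # Centre of block (row, col) in Stage coordinates, with row 0 at the top.
    x = -STAGE_W / 2 + BLOCK_W * col + BLOCK_W / 2
    y = STAGE_H / 2 - BLOCK_H * row - BLOCK_H / 2
    return x, y

print(block_position(0, 0))  # (-192.0, 144.0): the top-left block
print(block_position(2, 2))  # (0.0, 0.0): the centre block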
The next step is creating a sprite representing the visualization of the blocks of the maze. I would keep the first "costume" of our block sprite empty, and create a fully filled block to represent the walls of the maze.
After that, we need to programmatically create our maze. I made an example you can explore that randomly draws the blocks of a maze:
https://scratch.mit.edu/projects/278731659/
(You can change the rows & columns value to see it scale up, but remember the limit to the amount of clones the block sprite can have is 300)
This is just to get you started and by no means a complete solution. I just hope this helps you think in the right direction.
You can make this more advanced by adding a function that explores and corrects our randomly drawn grid to generate a walkable path from position x to position y. A rule you can program is, for example: every empty position in the grid should have at least two other empty positions in the spaces above, below, left and right of it.
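As a rough illustration of that rule (again in Python rather than Scratch blocks, using the flat-list layout from the sketch above), you can count the empty orthogonal neighbours of every empty block and flag the ones that fail:

def empty_neighbours(maze, row, col, rows, cols):
    # Count empty blocks directly above, below, left and right of (row, col).
    count = 0
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < rows and 0 <= c < cols and maze[r * cols + c] == 0:
            count += 1
    return count

def blocks_breaking_the_rule(maze, rows, cols):
    # Every empty block should have at least two empty orthogonal neighbours.
    bad = []
    for row in range(rows):
        for col in range(cols):
            if maze[row * cols + col] == 0 and empty_neighbours(maze, row, col, rows, cols) < 2:
                bad.append((row, col))
    return bad

Any block this returns can then be "repaired", for example by emptying one of its filled neighbours.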
There are many different ways to do this, whether with sprites and stamp or with 2D lists and pen. Either way, the main component is the algorithm. This Wikipedia page gives details on how maze generation works and a few different algorithms. There is also a video series by The Coding Train here where he creates a maze generator with the 2D list method from above (this method is a bit harder in Scratch, however). Either way, the best thing to do is to look at examples others have made, figure out how they work, and try to recreate them or make them better. Here's a good place to get started.
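For reference, the recursive-backtracker algorithm those resources describe boils down to something like this minimal Python sketch (in Scratch you would replace the recursion with your own stack list):

import random

def generate_maze(rows, cols):
    # Recursive backtracker on a grid with odd dimensions: 1 = wall, 0 = passage.
    maze = [[1] * cols for _ in range(rows)]

    def carve(r, c):
        maze[r][c] = 0
        directions = [(-2, 0), (2, 0), (0, -2), (0, 2)]
        random.shuffle(directions)
        for dr, dc in directions:
            nr, nc = r + dr, c + dc
            if 0 < nr < rows - 1 and 0 < nc < cols - 1 and maze[nr][nc] == 1:
                maze[r + dr // 2][c + dc // 2] = 0   # knock down the wall in between
                carve(nr, nc)

    carve(1, 1)
    return maze

for line in generate_maze(11, 11):
    print(''.join('#' if cell else ' ' for cell in line))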
Scratch IS truly pointless! A simple maze generator would have you use the pen to draw predefined shapes (Such as a long hallway or intersection). You should also make (invisible) squares to separate everything and have the program draw in the squares.
I will put a link later that leads to a sample project that has the code.
Check out this video by griffpatch
https://www.youtube.com/watch?v=22Dpi5e9uz8
This was one of my projects, and the instructor provided this video for everyone to follow and expand from.

MATLAB - Filling in the empty region of an ellipse/skull shape?

I have been attempting to fill in a binary image in MATLAB so that I am left with the entirety of this oval-like shape.
However, I have been running into an issue in actually being able to define the red region. I have tried the following:
Using the bwconvhull function to fill the shape accurately, but then I do not know how to get rid of the inner shape to isolate just the red region.
I have also attempted to trace the boundary of the binary region but to no avail. I am not entirely sure what to do after tracing the boundary. I have attempted to trace just the inner boundary, but the bwtraceboundary function simply follows the entirety of the borders (on the inside and outside of the skull).
Are there any similar functions to bwconvhull where I am able to expand a region from the center outward? My major difficulties have been in isolating either (a) the inner boundary of the skull or (b) the inner "black" region where the brain should be. My coding attempts can be found below:
Issue (a) - Tracing boundaries
hole = imread('Copy CT.jpg');
BW = im2bw(hole, .9);                  % threshold into a binary image
dim = size(BW);
col = round(dim(2)/2);                 % middle column
row = min(find(BW(:,col)));            % first white pixel in that column
boundary = bwtraceboundary(BW, [row, col], 'S');
x = boundary(:,2);
y = boundary(:,1);
Issue (b) - Isolating only the center
hole = imread('Copy CT.jpg');
BW = im2bw(hole, .9);
CH = bwconvhull(BW);                   % convex hull of the skull
KH = CH - BW;                          % hull minus skull leaves the enclosed gaps
KH2 = bwareaopen(KH, 200);             % drop small leftover regions
Are there any particular functions that would be worth trying, or would there be another way to isolate the center of the circle so I can only highlight the red region? Any insight would be greatly appreciated!
I would approach this with these steps:
apply an edge detection filter so you end up with two ellipse-ish shaped parts: an inner and outer ellipse.
apply an algorithmic ellipse-fit to the inner ellipse. There are some good examples out there, but I don't have one on me.
subtract the inner ellipse from the bwconvhull result.
subtract all parts of your new oval that overlap with the white portions of the original image.
I am sorry I don't have actual code to back up this approach, but this will get you pretty close. You may need more steps to clean up the final result.
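For what it's worth, here is a rough sketch of those four steps in Python/OpenCV rather than MATLAB; the contour selection and the thresholds are assumptions that would need adjusting for the actual CT image:

import cv2
import numpy as np

# Assumed input: a slice where the skull is bright and everything else is dark.
bw = cv2.imread('Copy CT.jpg', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(bw, 230, 255, cv2.THRESH_BINARY)

# Step 1: edge detection gives the inner and outer skull outlines.
edges = cv2.Canny(bw, 50, 150)

# Step 2: fit an ellipse to the inner outline. Taking the second-largest contour is an
# assumption; on real data you may need a smarter selection rule.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
inner_ellipse = cv2.fitEllipse(contours[1])

# Step 3: fill the fitted ellipse to get a candidate inner region.
region = np.zeros_like(bw)
cv2.ellipse(region, inner_ellipse, 255, thickness=-1)

# Step 4: remove anything that overlaps the white (skull) pixels of the original.
region[bw > 0] = 0

cv2.imwrite('inner_region.png', region)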

SpriteKit - Getting The Weight Of A SKSpriteNode

I have an app with large boxes falling on top of the small red box. I would like to know when the small red block reaches a certain weight (X blocks are resting on top of it). I couldn't find a weight property for the red block. Any suggestions?
EDIT: Just to clarify: the boxes falling from the top will be random sizes and will fall from random positions, so there isn't really a way to keep track of what has landed on top of the red block. I need some way to measure the downward force being applied to the red block.
You can read the mass of each node the following way and then add them together (multiply the total by the scene's gravity if you need the actual downward force):
redBox.physicsBody?.mass

Region of Interest in nighttime vehicle detection

I am developing a project for detecting vehicles' headlights in night scenes. I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to keep the computational cost low. I have read many papers and they just use a fixed ROI like this one: the upper part is ignored and the bottom part is analysed later.
However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible one, which changes in each frame. My experiment images are shown here:
If anyone has any ideas, please give me some suggestions.
I would turn the problem around: rather than saying that the headlights are below a certain line (i.e. the horizon), we look for headlights ABOVE a certain line.
Your images have a very high reflection onto the tarmac and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and headlights. We therefore look for the row with the maximum light and use that as our floor. Then look for headlights above this floor.
The idea here is that we look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.
This will only work with dark images (i.e. night) and where the reflection of the headlights onto the tarmac is large.
It will NOT work with images taken in daylight.
I have written this in Python and OpenCV but I'm sure you can translate it to a language of your choice.
import matplotlib.pylab as pl
import cv2
# Load the image
im = cv2.imread('headlights_at_night2.jpg')
# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Smooth the image heavily to mask out any local peaks or valleys
We are trying to smooth the headlights and the reflection so that there will be a nice peak. Ideally, the headlights and the reflection would merge into one area
grey_image = cv2.blur(grey_image, (15,15))
Sum the intensities row-by-row
intensity_profile = []
for r in range(0, grey_image.shape[0]):
    intensity_profile.append(pl.sum(grey_image[r, :]))
Smooth the profile and convert it to a numpy array for easy handling of the data
window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')
Find the maximum value of the profile. That represents the y coordinate of the headlights and the reflection area. The heat map on the left shows you the distribution. The right graph shows you the total intensity value per row.
We can clearly see that the sum of the intensities has a peak. The y-coordinate is 371 and is indicated by a red dot in the heat map and a red dashed line in the graph.
max_value = profile.max()
max_value_location = pl.where(profile==max_value)[0]
horizon = max_value_location
The blue curve in the right-most figure represents the variable profile
The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be that of the sky and therefore dark.
I display the result below.
I know that the lines in both images are at almost the same coordinate, but I think that is just a coincidence.
You may try downsampling the image.
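As a minimal illustration of that suggestion (OpenCV, to match the answer above; the scale factor is just an example), downsampling simply means resizing the frame before any processing:

import cv2

im = cv2.imread('headlights_at_night2.jpg')

# Halve the resolution in each dimension, so later steps touch a quarter of the pixels.
small = cv2.resize(im, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)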

Matlab color detection

I'm trying to consistently detect a certain color between images of the same scene. The idea is to recognize a set of object based on a color profile. So, for instance, if I'm given a scene with a green ball in it and I select that green as part of my color palette, I would like a function which has a matrix reflecting that it detects the ball.
Can anyone recommend some MATLAB functions/plugins/starting points for this project? Ideally the function for color recognition would take an array of color values and match them within a certain threshold.
Kinda like this:
http://www.mathworks.com/matlabcentral/fileexchange/18440-color-detection-using-hsv-color-space-training-and-testing
except it works (this one didn't)
Update:
Here's why I chose not to use the above toolkit..
I start by selecting some colors of interest in the picture
and then ask the function to recognize the road in later images...
And absolutely nothing useful is triggered. So yeah, apart from the few bugs that I came across in the code on download and fixed, this was kind of the kicker. I didn't try to fix the body of the code that recognizes the colors because.. well, I don't know how, which is why I came here.
So, let me just start off by saying that road detection with color profiles is a pathological problem. But if the color of the roads is consistent, and the lighting doesn't change the color of the object you are trying to recognize, then you might have a shot. (This will be extremely difficult if the footage is taken outside, with different cameras, with shadows, or in any sort of real-world environment.)
Here are a few things that might help.
Try smoothing the image beforehand; the reason you get the bad results in the first images is probably small pixel variations in the road. If you can blur them, or use some sort of watershed or local averaging, you might get regions with more consistent color.
You might also consider using the LAB color space instead of HSV or RGB.
Using edge detection (see MATLAB's Canny edge detector) might get you some boundary information. If you are looking for a smooth object, there will not be very many edges in it.
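As a rough sketch of the smoothing-plus-LAB suggestion (in Python/OpenCV rather than MATLAB; the reference pixel and the distance threshold are assumptions you would tune per scene), the detection boils down to thresholding the distance to a reference color in LAB space:

import cv2
import numpy as np

im = cv2.imread('road.jpg')                     # hypothetical input image

# Smooth first so small pixel variations in the road do not break up the region.
im = cv2.GaussianBlur(im, (9, 9), 0)

# Work in LAB, where Euclidean distance tracks perceived color difference better than RGB.
lab = cv2.cvtColor(im, cv2.COLOR_BGR2LAB).astype(np.float32)

reference = lab[400, 300]                       # assumed: a pixel known to lie on the road
distance = np.linalg.norm(lab - reference, axis=2)

mask = (distance < 25).astype(np.uint8) * 255   # threshold chosen by eye
cv2.imwrite('road_mask.png', mask)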
Edit: I tried to adhere to this advice in the most simplistic way. Here are the resulting code and a few samples.
im = rgb2gray(im);            % for most basic color capturing; using another color space is better practice
%imshow(im)
RoadMask = roipoly(im);       % create mask
RoadMask = uint8(RoadMask);   % cast so you can element-wise multiply
im = im .* RoadMask;          % apply mask
[x, y] = size(im);
for i = 1:x
    for j = 1:y
        %disp('here')
        if (im(i,j) < 160 || im(i,j) > 180)   % select your values based on your target's range
            im(i,j) = 0;                      % replace everything outside of range with 0
            %disp(im(i,j))                    % if you'd like to count pixels, turn all values
        end                                   % within range to 1 and do a sum at the end
    end
end
First converted from RGB to grayscale
selected a region that generally matched the road's grayness
Notice that parts of the road are not captured and that the edges are blocky.
This implementation was quick and dirty, but I wanted to put it up before I forgot. I'll try to update with code that implements smoothing, sampling, and the LAB color space.