How to create a cloud variable in Scratch

All over the internet I see examples and tutorials in which people create cloud variables. But when I create a variable (I've tried Scratch 2 and 3) I get this:
But what I would like to get is this:
I just watched a YouTube tutorial which said that for cloud variables to work you have to have been a Scratcher for at least 3 months (I have had an account for 4 months now). So what do I have to do to get the cloud checkbox?

There are two levels (for normal users) of scratchers: a New Scratcher, and a Scratcher.
When you create an account, you get the status of a New Scratcher. This is what your status is now:
Then, to become a Scratcher, there are certain secret criteria you have to meet: some combination of having shared around three projects, commented five times, and followed a few users, if I recall correctly. Once you meet them, you'll get the Scratcher status, and it'll look like this (using my account as an example):
Basically - you have to participate more on the site to lose the status of New Scratcher. Here are the requirements for becoming a full-on Scratcher.
Once you meet those criteria, head to the "data" section of blocks, press "create a new variable", and then tick the "cloud variable" checkbox. Note that unlike regular variables, cloud variables can only store numbers, not letters or other non-numeric characters: each value may be up to 256 characters long, drawn only from the characters 0123456789.eE-+ (trailing and leading zeroes are allowed), with a maximum of ten cloud variables per project.
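To make the constraint concrete, here is a minimal sketch (plain Python, not any official Scratch API) of a check for whether a value fits the rules above:

```python
# Illustrative sketch of the cloud-variable value rules described above:
# at most 256 characters, drawn only from 0123456789.eE-+ .
# This is just a client-side sanity check, not part of Scratch itself.

ALLOWED = set("0123456789.eE-+")
MAX_LEN = 256

def is_valid_cloud_value(value: str) -> bool:
    """Return True if `value` could be stored in a Scratch cloud variable."""
    return 0 < len(value) <= MAX_LEN and all(ch in ALLOWED for ch in value)
```

For example, "00123" passes (leading zeroes are allowed) while "hello" does not.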

You have to become a "Scratcher", not a "New Scratcher".

You just have to sign in. That was a mistake I also made when I started working with cloud variables. If you have an account and it still does that, just verify your email and it will work.


Use "hours" for estimations in VSTS

I was looking in VSTS, but I couldn't find how to estimate our tasks/user stories in hours instead of story points.
Is this possible?
I know the pros (and cons) of story points, but for now our team wants to move to agile progressively, and we don't want to start by estimating in story points.
Thank you!
Edit: as requested, I currently use the Agile template (but am open to change).
Declare "One story point is equal to one hour" and use the existing field as-is.
TL;DR
Use the built-in scrum template
If you use the built-in Scrum template, Tasks have Remaining Work and PBIs have Effort. Nothing says Remaining Work = hours or Effort = story points.
If you want to estimate your tasks in the number of 4-hour work blocks it will take to complete them, you can do that; if you want to do it in hours, you can do that too. The same goes for Effort: you can put any number in there you want, as long as you make sure everyone on your team understands what 1, 5, or 10 means.
So, if possible, switch to the Scrum template; your question is exactly the reason these fields have more generic names than Story Points or Remaining Hours. An added bonus is that your team can change the definition if some other number or unit suits your estimation process better.
This blog post makes a good comparison between the different built-in templates:
https://nkdagility.com/choosing-a-process-template-for-your-team-project/
If you are an administrator in VSTS, select the VSTS button at the top left of the screen, then select the cog.
This will take you to a page where you can edit a number of settings. Select Process.
In the process window, choose the process your project uses. You can see which one your project is using from the numbers on the right-hand side of each process. Once you've found your process, click it.
Then choose the work item type you wish to change. So in this instance user story.
Once in the work item you want to change, select Add group, name the group you want to add, and choose its placement on the card.
When the group has been added select it and choose the ellipses (...). Then select Add Field. Customise the field to be either a new one of your choosing or choose a predefined one.
Once you have added this, repeat the process on this page to customise and style the work item how you like. When done, navigate back to your project in VSTS and the changes will be applied.

PowerShell AD tool

Basically, I've created a 400+ and growing AD tool with a GUI. I've given users the option to search using many filters with Get-ADUser, and wildcards fill the rest. This company is large, so when I search for a common name like Kyle or John it takes a long time to pull the information because the search isn't specific.
Is there a way to stop them from doing such a general search, to limit the number of entries that can be in an array, or to stop the search if there's too much information?
Edit: I have a solution using a variable to count wildcards, but that only works if the form is completely blank. If I set the number any differently, they won't be able to look someone up by ID number.
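Generalising the wildcard-counting idea in the edit: instead of requiring a completely blank form, you could require a minimum number of literal (non-wildcard) characters across all filter fields before running the query. Here is a sketch of just that check (shown in Python since the logic is independent of Get-ADUser; the field names are made up):

```python
# A sketch of the idea in the edit, generalised: count the literal
# (non-wildcard) characters across the whole form, and only run the
# search if there are enough of them. The filter field names here
# ("Name", "EmployeeID") are hypothetical examples.

def specific_enough(filters, min_literal_chars=3):
    """Reject searches whose combined filters are too generic."""
    literal = sum(
        len(v.replace("*", "").replace("?", ""))
        for v in filters.values() if v
    )
    return literal >= min_literal_chars
```

With this check, an ID lookup like {"EmployeeID": "12345"} passes, while a bare-wildcard form like {"Name": "*"} is rejected, so ID lookups keep working even when the rest of the form is blank.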

How can I use a real-time workflow in CRM 2015?

I have a real-time workflow for creating unique numbers. This workflow gets a numeric field from my custom entity, increases it by 1, and updates it for the next use.
I want to run this workflow on multiple records.
In on-demand mode it works fine and I get true, unique numbers, but in "Record is Created" mode it does not work correctly and I get repeated numbers.
What do I have to do?
This approach won't work. When the workflow runs automatically it runs multi-threaded: e.g. two users create two records, and two instances of the workflow start. As there is no locking mechanism, you end up with duplicated numbers.
I'm guessing this isn't happening when running on demand because you are running as a single user.
You will need to implement a custom auto number approach, such as Auto Number for DynamicsCRM.
Disclaimer: I work for Gap Consulting who produce the tool linked above.
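The race described above can be sketched outside CRM. This is a minimal illustration (plain Python, not CRM code) of why read-increment-write duplicates numbers without a lock, and how serialising the critical section fixes it:

```python
import threading

# Two "workflow instances" both read the counter before either writes it
# back, so both hand out the same number -- the interleaving described above.

def race_demo():
    counter = 0
    read_a = counter           # instance A reads the field...
    read_b = counter           # ...and instance B reads it before A writes back
    number_a = read_a + 1      # both compute "next number" from the same value
    number_b = read_b + 1
    counter = max(number_a, number_b)
    return number_a, number_b  # both are 1: a duplicate

# With a lock serialising the read-increment-write, every caller gets a
# unique number -- the kind of guarantee a custom auto-number solution
# has to provide.

def locked_demo():
    counter = 0
    lock = threading.Lock()
    numbers = []
    def take_number():
        nonlocal counter
        with lock:
            counter += 1
            numbers.append(counter)
    threads = [threading.Thread(target=take_number) for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(numbers)     # 1..10, no duplicates
```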

Billing by tag in Google Compute Engine

Google Compute Engine allows for a daily export of a project's itemized bill to a storage bucket (.csv or .json). In the daily file I can see X-number of seconds of N1-Highmem-8 VM usage. Is there a mechanism for further identifying costs, such as per tag or instance group, when a project has many of the same resource type deployed for different functional operations?
As an example, ten n1-highmem-8 VMs are deployed to a region in a project. In the daily bill they just display as X seconds of N1-Highmem-8.
Functionally:
2 VMs might run a database 24x7
3 VMs might run a batch analytics operation averaging 2-5 hrs each night
5 VMs might perform a batch operation which runs in sporadic 10-minute intervals through the day
The final operation writes data to a specific GCS bucket; the other operations read/write to different buckets.
How might costs be broken out across these four operations each day?
The usage logs do not provide per-tag granularity at this time, and they can be a little tricky to work with, but here is what I recommend.
To break down the usage logs further and get better information out of them, I'd suggest working like this:
Your usage logs provide the following fields:
Report Date
MeasurementId
Quantity
Unit
Resource URI
ResourceId
Location
If you look at the MeasurementID, you can choose to filter by the type of image you want to verify. For example VmimageN1Standard_1 is used to represent an n1-standard-1 machine type.
You can then use the MeasurementID in combination with the Resource URI to find out what your usage is on a more granular (per instance) scale. For example, the Resource URI for my test machine would be:
https://www.googleapis.com/compute/v1/projects/MY_PROJECT/zones/ZONE/instances/boyan-test-instance
Note: I've replaced the project and zone with "MY_PROJECT" and "ZONE" here; those parts would be specific to your output, along with the name of the instance.
If you look at the end of the URI, you can clearly see which instance that is for. You could then use this to look for a specific instance you're checking.
If you are skilled with Excel or other spreadsheet/analysis software, you may be able to do even better; this is just one idea of how you could use the logs. At that point it becomes somewhat a question of creativity, and I am sure you can find good ways to work with the data you get from an export.
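As a concrete illustration of the filtering-and-grouping idea above, here is a short sketch that sums Quantity per instance for one MeasurementId, taking the instance name from the tail of the Resource URI. The field names follow the list above; the sample rows and values are invented:

```python
import csv
import io
from collections import defaultdict

# Sketch: filter usage-log rows by MeasurementId and group Quantity per
# instance, using the last path segment of the Resource URI as the instance
# name. The column names match the field list above; the data is made up.

SAMPLE_LOG = """\
Report Date,MeasurementId,Quantity,Unit,Resource URI
2017-09-01,VmimageN1Standard_1,3600,seconds,https://www.googleapis.com/compute/v1/projects/p/zones/z/instances/db-1
2017-09-01,VmimageN1Standard_1,1800,seconds,https://www.googleapis.com/compute/v1/projects/p/zones/z/instances/batch-1
2017-09-01,VmimageN1Standard_1,1800,seconds,https://www.googleapis.com/compute/v1/projects/p/zones/z/instances/db-1
"""

def usage_per_instance(log_text, measurement_suffix):
    """Sum Quantity per instance for rows matching one MeasurementId."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(log_text)):
        if row["MeasurementId"].endswith(measurement_suffix):
            instance = row["Resource URI"].rsplit("/", 1)[-1]
            totals[instance] += float(row["Quantity"])
    return dict(totals)
```

Running this over the sample gives 5400 seconds for db-1 and 1800 for batch-1, which is the per-instance breakdown the answer describes.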
Update (September 2017):
It is now possible to add user-defined labels, then track usage and billing by these labels for Compute and GCS.
Additionally, by enabling the billing export to BigQuery, it is possible to create custom views, or to query BigQuery from a tool friendlier to finance people, such as Google Docs, Data Studio, or anything that can connect to BigQuery. Here is a great example of labels across multiple projects used to split costs into something friendlier to organizations, in this case a Data Studio report.
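To illustrate the label-based split, here is a small sketch that totals cost per label value. The row shape only loosely mirrors the BigQuery billing export (which stores labels as key/value pairs alongside cost); the sample data and the "role" label are invented:

```python
from collections import defaultdict

# Sketch of splitting costs by a user-defined label, as described above.
# Each row carries a cost and a list of label key/value pairs, loosely
# mirroring the BigQuery billing export. All values here are invented.

rows = [
    {"cost": 4.25, "labels": [{"key": "role", "value": "database"}]},
    {"cost": 1.50, "labels": [{"key": "role", "value": "analytics"}]},
    {"cost": 0.25, "labels": [{"key": "role", "value": "database"}]},
]

def cost_by_label(rows, key):
    """Total cost per value of one label key; unlabelled rows are grouped."""
    totals = defaultdict(float)
    for row in rows:
        value = next((l["value"] for l in row["labels"] if l["key"] == key),
                     "unlabelled")
        totals[value] += row["cost"]
    return dict(totals)
```

This is the kind of aggregation a custom BigQuery view (or a Data Studio report on top of it) would do per label.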

Lotus Notes application Document count and disk space

I'm using Lotus Notes 8.5.2 and made a backup of my mail application in order to preserve everything in a specific folder before deleting its contents from my main application. The backup is a local copy, created by going to File --> Application --> New Copy, setting the Server to Local, and giving it a title and file name that I save in a folder. All of this works okay.
Once I have that, I go into the All Documents & delete everything out except the contents of the folder(s) I want this application to preserve. When finished, I can select all and see approximately 800 documents.
However, there are a couple of other things I have noticed. First, the document count: right-click on the newly created application, go to Properties, and select the "i" tab; it shows Disk Space and a document count there. However, that document count doesn't match what is shown when you open the application and go to All Documents. The All Documents count matches the roughly 800 I had after deleting all but the contents I wanted to preserve; instead, the properties say the application has almost double that amount (1500+), with a fairly large file size.
I know about the unread document count, and in this particular application I checked "Don't maintain unread marks" on the last property tab. There is no red number in the application, but neither the document count nor the file size changed when that was selected. Compacting the application makes no difference.
I'm concerned that although I've trimmed down what I want to preserve in this Lotus Notes application, there's a lot of excess baggage with it. Also, since the document count appears to be inflated, I suspect the file size is too.
How do you make a backup copy of a Lotus Notes application, then keep only what you want & have the Document Count and File Size reflect what you have actually preserved? Would appreciate any help or advice.
Thanks!
This question might really belong on ServerFault or SuperUser, because it's more of an admin or user question than a development question, but I can give you an answer from a developer angle...
Open your mailbox in Domino Designer, and look at the selection formula for the $All view. It should look something like this:
SELECT @IsNotMember("A"; ExcludeFromView) & IsMailStationery != 1 & Form != "Group" & Form != "Person"
That should tell you first of all that indeed, "All Documents" doesn't really mean all documents. If you take a closer look, you'll see that three types of documents are not included in All Documents.
Stationery documents
Person and Group documents (i.e., synchronized contacts)
Any other docs that any IBM, 3rd party, or local developer has decided to mark with an "A" in the ExcludeFromView field. (I think that repeat calendar appointment info probably falls into this category.)
One or more of those things is accounting for the difference in your document count.
If you want, you can create a view with the inverse of that selection formula by reversing each comparison and changing the Ands to Ors:
SELECT @IsMember("A"; ExcludeFromView) | (IsMailStationery = 1) | (Form = "Group" | Form = "Person")
Or, for that matter, you can get the same result by taking the original formula, surrounding it with parentheses, and prefixing it with a logical not (!).
Either way, that view should show you everything that's not in All Documents, and you can delete anything there that you don't want.
For a procedure that doesn't involve mucking around with Domino Designer, I would suggest making a local replica instead of a local copy, and using the selective replication option to replicate only documents from specific folders (Space Savers under More Options). But that answer belongs on ServerFault or SuperUser so if you have any questions about it please enter a new question there.