I have a final project due soon for which I have to create a building calculator GUI.
Inputs will be
number of floors
number of bathrooms, and
square footage
The output shall be the cost in materials to build the home.
One of my goals (my TA initially told me this was doable, but then didn't seem to know much about it once I went into more depth) is to automatically update the cost of each material from a website such as Home Depot's each time the GUI is run. Building material costs change frequently, so I want to assign each material its own cost value that is updated automatically from Home Depot's website. Is this something that is possible?
I appreciate any input.
html = urlread('http://www.homedepot.com/p/SHEETROCK-UltraLight-1-2-in-x-4-ft-x-8-ft-Gypsum-Board-14113411708/202530243');  % fetch the product page's full HTML source
RockPrice = str2double(regexp(html, '"displayPrice":"?\$?([\d.]+)', 'tokens', 'once'))  % scrape the displayPrice value out of the markup (the pattern is a guess and will break if the page changes)
For getting a webpage's content (including all markup), urlread can be used. Parsing that string for the data you want ('scraping' it, as some like to call this process) might be non-trivial in MATLAB, though, and any pattern you match against is liable to break whenever the page's markup changes.
Easier to handle would be data from a dedicated API, and Home Depot seems to actually have a REST API that has their product details. All their public APIs seem to still be in private beta, though, so I don't know how successful one would be in requesting an API key.
We have been using Dynatrace in our organization for a long time. It is a great tool for pointing out performance pain points and knowing what's happening in the system. However, we have found that the reporting is not great. In our setup, data gets wiped after 20 days in non-production environments, and all the details are lost with it. To keep track of the underlying calls, we currently have to take screenshots or put the data into an Excel file manually. This helps us compare old results with the latest development/improvements.
We were wondering whether there is a Dynatrace API that can push the PurePath information in JSON format. We checked the Dynatrace API page, but there is none. Creating Excel files manually is a waste of time and adds no value. Has anyone else found a workaround for this?
For example, for the screenshot we would want JSON containing the list of underlying DB calls shown under the controller, their start/end times, time consumed, etc.
Please help
I am working on a website of 3,000+ pages that is updated on a daily basis. It's already built on an open source CMS, but we cannot simply keep applying hot fixes on a regular basis. We need to replace the entire system, and I anticipate having to replace it again every 1-2 years. We don't have the staff to build a replacement system while the current one is being worked on, as that results in duplicated effort. We also cannot have a "code freeze" while we work on the new site.
So, this amounts to changing the tire while driving. Or fixing the wings while flying. Or all sorts of analogies.
This brings me to a concept called "continuous migration." I read this article here: https://www.acquia.com/blog/dont-wait-migrate-drupal-continuous-migration
The writer's suggestion is to use a CDN like Fastly. The idea is that a CDN allows you to switch between a legacy system and a new system on a URL-by-URL basis. In theory, this sounds like an approach that would work. The article claims you can do this with Varnish, but that Fastly makes the job easier. I don't work much with Varnish, so I can't really verify those claims.
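As I understand it, the switching logic amounts to something like the sketch below (Python just for illustration; the path prefixes and backend hostnames are made up): requests for sections that have already been rebuilt go to the new system, and everything else falls through to the legacy CMS.

# Sketch of URL-based routing between a legacy CMS and a new system.
# The migrated prefixes and backend hosts are hypothetical examples.
MIGRATED_PREFIXES = ("/blog/", "/about/", "/products/")   # sections already rebuilt
LEGACY_BACKEND = "https://legacy.example.com"
NEW_BACKEND = "https://new.example.com"

def choose_backend(path):
    """Return the origin that should serve this request path."""
    if path.startswith(MIGRATED_PREFIXES):
        return NEW_BACKEND
    return LEGACY_BACKEND

if __name__ == "__main__":
    for p in ("/blog/launch-post", "/contact", "/products/widget-3000"):
        print(p, "->", choose_backend(p))

My understanding is that Varnish would express this same decision in VCL, and a CDN like Fastly simply hosts that logic at the edge for you.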
I also don't know if this is a good idea or whether there are better alternatives. I looked at Fastly's pricing scheme, and I simply cannot translate it into a specific price point; these cryptic cloud-service pricing plans don't make sense to me. I also don't know what kind of bandwidth the website uses, since another agency manages the website's servers.
Can someone help me understand whether using an online CDN would be better than using something like Varnish? Are there free or cheaper solutions? Can someone tell me what this amounts to, approximately, on a monthly or annual basis? Are there other, better ways to roll out a new website on a phased basis for a large site?
Thanks!
I don't think I have exact answers to your question, but maybe my answer helps a little bit.
I don't think the CDN as such gives you the advantage; the advantage is that you have more than one system.
Changes to the code
In professional environments I'm used to having three different CMS installations. The first is the development system, usually on my PC. That system is used to develop the extensions, fix bugs and so on, supported by unit tests. The code is committed to a revision control system (like SVN, CVS or Git). A continuous integration system checks the commits to the RCS. When a feature is implemented (or some bugs are fixed), a named tag is created. This tagged version is then installed on a test system where developers, customers and users can test the implementation. After a successful test, exactly this tagged version is installed on the production system.
At first sight this looks time-consuming, but it isn't, because most of the steps can be automated. And the biggest advantage is that the customer can test the change on a test system, so it is very unlikely that an error occurs only on your production system. (A precondition is that your systems are built on a similar/identical environment.)
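As a minimal illustration of the "install exactly this tagged version" step (a sketch only; the repository path, tag name and target directory are placeholders), the automated deploy can be as simple as exporting the named tag and unpacking it on the target system:

# Sketch: export a tagged release from Git and unpack it into the deploy directory.
# Repository path, tag name and target path are hypothetical placeholders.
import io
import subprocess
import tarfile

REPO = "/srv/repos/cms-extensions.git"   # bare repository on the build host
TAG = "release-1.4.2"                    # the tag that passed on the test system
TARGET = "/var/www/cms/extensions"       # where the production code lives

def deploy_tag(repo, tag, target):
    """Export exactly the given tag and unpack it into the target directory."""
    archive = subprocess.run(
        ["git", "--git-dir", repo, "archive", "--format=tar", tag],
        check=True, capture_output=True,
    ).stdout
    with tarfile.open(fileobj=io.BytesIO(archive)) as tar:
        tar.extractall(target)

deploy_tag(REPO, TAG, TARGET)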
Changes to the content
If your code changes the way your content is processed, it is an advantage when your CMS has strong workflow support. Then you can easily add a step to your workflow which decides whether the content is old and has to be migrated for the current document. This way you have a continuous migration of the content.
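A rough sketch of such a workflow step (Python for illustration; the version field, field names and upgrade functions are invented): each document records the schema version it was written with, and the step upgrades it the first time the new code touches it.

# Sketch: lazily migrate a document the first time the new code processes it.
# CURRENT_VERSION, the field names and the upgrade steps are hypothetical.
CURRENT_VERSION = 3

def upgrade_1_to_2(doc):
    doc["body_html"] = doc.pop("body", "")   # e.g. the old 'body' field moves to 'body_html'
    return doc

def upgrade_2_to_3(doc):
    doc["tags"] = [t.strip() for t in doc.get("keywords", "").split(",") if t.strip()]
    return doc

UPGRADES = {1: upgrade_1_to_2, 2: upgrade_2_to_3}

def migrate_if_old(doc):
    """Workflow step: bring a document up to the current content schema."""
    version = doc.get("schema_version", 1)
    while version < CURRENT_VERSION:
        doc = UPGRADES[version](doc)
        version += 1
        doc["schema_version"] = version
    return doc

Content that is never requested is never migrated, which is what makes the migration "continuous" rather than a big-bang batch job.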
HTH
Varnish is a cache rather than a CDN. It intercepts page requests and delivers a cached version if one exists.
A CDN will serve up content (images, JS, other resources, etc.) from an off-server location, typically in the cloud.
Cloud-based solutions' pricing is often very cryptic, as it's quite complicated technology.
I would be careful with continuous migration. I've done both methods in the past (continuous and full migrations) and I have to say, continuous is a pain. It means double the admin time for everything, and assumes your requirements are the same at all points in time.
Unfortunately, I would say you're better off with a proper rebuild on a 1-2 year basis than with a continuous migration, but obviously you know your situation best.
I would suggest you maybe also consider a hybrid approach? Build yourself an export tool that keeps all of your content in a transferable format like CSV/XML/JSON, so you can just import it into a new system when ready. This means you can incorporate new build requests when you need them in a new system (what's the point of a new system if it does exactly the same as the old one?) and you still get to keep all your content. Plus you don't need to build and maintain two CMSs at the same time.
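As a sketch of that export-tool idea (Python for illustration; the database file, table and column names are invented and would have to match your CMS's schema), the core of it is just dumping every content item into a neutral format such as JSON:

# Sketch: dump CMS content into a neutral, transferable JSON file.
# The database file, table and column names are hypothetical examples.
import json
import sqlite3

def export_content(db_path, out_path):
    """Write every page as a plain dict so any future system can import it."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = conn.execute("SELECT id, title, body, updated_at FROM pages").fetchall()
    with open(out_path, "w", encoding="utf-8") as fh:
        json.dump([dict(row) for row in rows], fh, indent=2, ensure_ascii=False)
    conn.close()

export_content("cms.sqlite", "content-export.json")

Run it on a schedule and the export doubles as a content backup while you build the new system.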
Our team captures user stories in TFS. I use the great tool TeamSpec to dump these into a Word document for good ol' fashioned easy reading.
Now, we are at the point where we need to produce a functional specification that describes the software that will be built to support those user stories.
Again, I'd probably like this functional spec in Word ultimately, as it has to be a readable document that the customer can read and sign off on.
That said, I would really love to have a tool that helps me map user stories to functional requirements, possibly even generating the matrix (in both directions) for easy reference.
What tools are there that might help me? Googling is no help at all. :)
Thanks in advance!
Here is what we have decided to do.
There will be a High Level Design, and a Functional Specification document.
The customer will get both, but the HDD will be written up front, for sign off, agreement and limiting of scope.
The functional specification will also be signed off on, but incrementally - we will fill in a section as we decide what will be done, and get sign off.
The HDD will have user stories in it; each user story is a ticket in TFS, synced via TeamSpec.
The Func Spec will have tasks in it, which are the user stories broken down into features/requirements. TeamSpec is again used to populate this in the Func Spec.
So the only thing left is showing a matrix. I will use TFS's ability to pop out a .xls file; if it shows the user story -> task breakdown, that gives me the matrix we need.
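If the raw export needs massaging, the matrix itself is trivial to build from the story -> task pairs. A sketch (assuming the .xls is saved as CSV, with hypothetical "Story" and "Task" column headers):

# Sketch: build a story-to-task traceability matrix from a TFS export saved as CSV.
# The file name and the 'Story' / 'Task' column headers are assumptions about the export.
import csv
from collections import defaultdict

def build_matrix(csv_path):
    """Map each user story to the tasks (functional requirements) derived from it."""
    matrix = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            matrix[row["Story"]].append(row["Task"])
    return matrix

for story, tasks in build_matrix("tfs-export.csv").items():
    print(story)
    for task in tasks:
        print("   ->", task)

Reversing the mapping (task -> stories) is the same loop keyed the other way, which covers the matrix in both directions.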
So that's it really.
It solves the issue of the HDD, the Func Spec, and the mapping of requirements and items between the two, while still recording everything in TFS for access by developers.
I've searched the web for this but to no avail; I hope someone can point me in the right direction. I'm happy to look things up, but it's knowing where to start.
I am creating an iPhone app which takes content updates from a web server and will also push feedback there. While the content is obviously available via the app, I don't want the source address to be discovered and published by some unhelpful person so that it all becomes freely available.
I'm therefore looking at placing the content in a MySQL database and possibly writing some PHP routines to handle access via my http(s) requests. That's all pretty new to me, but I can probably do it. However, I'm not sure where to start with the security question. Something simple and straightforward would be great. Also, any guidance on whether to stick with the XML parser I currently have or to switch to JSON would be much appreciated.
The content consists of straightforward data but also html and images.
Doing exactly what you want (preventing users of 'unauthorized' apps from getting access to this data) is rather difficult, because at the end of the day any access codes and/or URLs will be stored in your app for someone to dig up and exploit.
If you can, consider authenticating against the user, not the app. That way, even if a third-party app is created that can access this data from wherever you store it, you can still disable access on a per-user basis.
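A minimal sketch of that idea on the server side (Python here just to show the shape; your stack is PHP/MySQL, and the table, columns and token scheme below are made up): every request carries a per-user token, and content is only returned for tokens that exist and have not been disabled.

# Sketch: per-user access check before serving any content.
# The table, column names and token format are hypothetical placeholders;
# the same logic translates directly to a PHP/MySQL endpoint served over https.
import sqlite3

def fetch_content(db_path, api_token, content_id):
    """Return the requested content only if the token maps to an enabled user."""
    conn = sqlite3.connect(db_path)
    try:
        user = conn.execute(
            "SELECT id FROM users WHERE api_token = ? AND disabled = 0", (api_token,)
        ).fetchone()
        if user is None:
            return None   # unknown or disabled user: deny, regardless of which app asked
        row = conn.execute(
            "SELECT body FROM content WHERE id = ?", (content_id,)
        ).fetchone()
        return row[0] if row else None
    finally:
        conn.close()

Because the check is on the user rather than the app, a leaked URL is worthless on its own, and a misbehaving account can be switched off without breaking anything for everyone else.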
Like everything in the field of information security, you have to consider the cost-benefit. You need to weigh up the value of your data against the cost of your security, both in terms of actual development cost and the cost of protecting it, as well as the cost of inconveniencing users to the point that you can't sell your data at all.
Good luck!
How does one build a directory of 'Spots' for users to check in to in a native iPhone app? Or does the developer borrow data from, let's say, Google Maps?
When you use data obtained from another network or source, you take the risk that the data may change, may not be accurate, or may cease to exist (more so with Google, LOL; one minute they are there like gangbusters, the next they are just gone, no explanation, no apologies, missing in action). If you're developing an application for a business, it's always best to use your own data sources.
That may be more expensive, but it's the only way you will have any kind of control over your application's resources.
You can go either way; it depends on what you want to do and how you design it to do it. You can have a prerecorded, static database of spots, you can update it occasionally by connecting to a server, or you can do it all dynamically by loading the data from the internet each time (see the sketch below).
Which one to choose? First you should design your app keeping in mind things like:
How much of this data will change
How frequently those changes will happen
How much it will cost to do an update
and so on
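For the middle option, a sketch of the refresh logic (Python for illustration; the URL, cache file and refresh interval are made up) is simply "use the local copy unless it is stale":

# Sketch: keep a local cache of the spots list and refresh it from a server when stale.
# The server URL, cache file and maximum age are hypothetical examples.
import json
import os
import time
import urllib.request

SPOTS_URL = "https://example.com/api/spots.json"   # your own server
CACHE_FILE = "spots-cache.json"
MAX_AGE_SECONDS = 24 * 60 * 60                     # refresh at most once a day

def load_spots():
    """Return the cached spots, refreshing from the server if the cache is old or missing."""
    stale = (not os.path.exists(CACHE_FILE)
             or time.time() - os.path.getmtime(CACHE_FILE) > MAX_AGE_SECONDS)
    if stale:
        with urllib.request.urlopen(SPOTS_URL) as resp:
            data = resp.read()
        with open(CACHE_FILE, "wb") as fh:
            fh.write(data)
    with open(CACHE_FILE, "r", encoding="utf-8") as fh:
        return json.load(fh)

The same pattern applies whatever the client platform; the answers to the questions above decide how long the maximum age should be, or whether a bundled static database is enough.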
Developing your own database of places is likely to be quite an undertaking (and your competitors have a big head start). Google is beginning to provide their Places API for "check-in" style applications, so you may be able to get in on their beta.