Please excuse me if I use incorrect terms or concepts. It seems I am in the midst of a crash course on MS Project, Project Server, and the PSI...
Project Professional provides the Resource Usage view that lists a given Resource, the Tasks they have been assigned to, and the amount of scheduled Work for a given day.
Is this information available in Project Server and how would I read it using the PSI?
Thanks.
Jason
If you're just getting started with PSI, I'd strongly recommend downloading and using the ProjTool app that is part of the Project 2007 SDK.
I haven't done too much work with Resources, but after taking a quick look, here is how I'd approach it:
Reference the Project.asmx service (ex: http://servername/pwa/_vti_bin/psi/Project.asmx)
Use the ReadProjectEntities method to retrieve a DataSet, passing it the ProjectEntityType values for Task, Assignment, and Resource.
Define some entity types:
public const int ENT_TYPE_TASK = 2;
public const int ENT_TYPE_RESOURCE = 4;
public const int ENT_TYPE_ASSIGNMENT = 8;
Then you can read the data:
int entity = ENT_TYPE_TASK | ENT_TYPE_ASSIGNMENT | ENT_TYPE_RESOURCE;
ProjectDataSet dataSet = project.ReadProjectEntities(projectUid, entity, DataStoreEnum.PublishedStore);
// do stuff with these tables...
//dataSet.Task
//dataSet.Assignment
//dataSet.ProjectResource
ReadProjectEntities is nice because you can read only the part of the project you need... if you need more than the Task table then you can use a logical OR to get additional ProjectEntityTypes.
As for the assigned work, it looks like that is also in the Assignment table, but I think you'll have to do some calculating.
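For example, here is a minimal sketch of walking the Assignment table (field names such as RES_UID, TASK_UID, and ASSN_WORK are my assumption from the SDK schema, and the work values will likely need unit conversion):

foreach (ProjectDataSet.AssignmentRow assn in dataSet.Assignment)
{
    // Group or sum these rows per resource as needed; this is the total
    // assigned work, not the per-day breakdown.
    Console.WriteLine("Resource {0} / Task {1}: work = {2}",
        assn.RES_UID, assn.TASK_UID, assn.ASSN_WORK);
}

Note that the per-day breakdown shown in the Resource Usage view is timephased data, which I believe you would have to pull from elsewhere (e.g., the Reporting database) rather than from this DataSet.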
I am looking to open multiple connections using a netty client bootstrap in order to parse messages coming from multiple sources. The messages all have the same format; however, due to the amount of data that needs to be processed, I must run each connection on a separate thread. (This assumes netty creates a thread per client channel, which I couldn't find a reference for -- if that's not the case, how would this be achieved?)
This is the code that I use to connect to the data server:
var b = new Bootstrap()
.group(group)
.channel(classOf[NioSocketChannel])
.handler(RawFeedChannelInitializer)
var ch1 = b.clone().connect(host, port).sync().channel();
var ch2 = b.clone().connect(host, port).sync().channel();
The initializer calls RawPacketDecoder, which extends ReplayingDecoder, and is defined here.
The code works well without @Sharable when opening a single connection, but for the purposes of my application I must connect to the same server multiple times.
This results in the runtime error @Sharable annotation is not allowed pointing to my RawPacketDecoder class.
I am not entirely sure how to get past this issue, short of reimplementing my decoder in Scala as an instantiable class based directly on ByteToMessageDecoder rather than ReplayingDecoder.
Any help would be greatly appreciated.
Note: I am using netty 4.0.32.Final
I found the solution in this Stack Overflow answer.
My issue was that I was using an object-based ChannelInitializer (a singleton), and neither ReplayingDecoder nor ByteToMessageDecoder is sharable.
My initializer was created as a Scala object, so only a single instance was allowed. Changing the initializer to a Scala class and instantiating it for each bootstrap clone solved the problem. I modified the bootstrap code above as follows:
var b = new Bootstrap()
.group(group)
.channel(classOf[NioSocketChannel])
//.handler(RawFeedChannelInitializer)
var ch1 = b.clone().handler(new RawFeedChannelInitializer()).connect(host, port).sync().channel();
var ch2 = b.clone().handler(new RawFeedChannelInitializer()).connect(host, port).sync().channel();
I am not sure whether this ensures multithreading as intended, but it does allow me to split the data access into multiple connections to the feed server.
Update: After performing additional research on the subject, I have determined that netty does in fact create a thread per channel; I verified this by printing the active thread count after the creation of each channel:
println("No. of active threads: " + Thread.activeCount());
The output shows an incremental number as channels are created and associated with their respective threads.
By default, NioEventLoopGroup uses 2 * (number of CPU cores) threads, as defined here:
DEFAULT_EVENT_LOOP_THREADS = Math.max(1, SystemPropertyUtil.getInt(
        "io.netty.eventLoopThreads",
        Runtime.getRuntime().availableProcessors() * 2));
This value can be overridden to something else by setting
val group = new NioEventLoopGroup(16)
and then using the group to create/setup the bootstrap.
I want to create a very simple query to look up an SQLite DB using greenDAO: two fields, one is the ID and the other is 'affirmation'.
I am sorry to be such a beginner, but I am not sure how to use greenDAO, including what to import, etc. All I have been able to do so far is add the greenDAO libraries, but I can't find a good tutorial that just does a query. Basically I want a random ID to call up a random affirmation and return it to my main activity. Once again, I am sorry, but I am really trying and getting nowhere.
greenDAO is an ORM framework. If you don't know what this means, you should look that up first.
greenDAO generally works as follows:
You create a Java project that generates the source code for your real app. You have to include DaoCore and DaoGenerator in this project.
You add the generated source code to your Android project and include DaoCore in it. DaoGenerator is not necessary.
For examples of how to generate the code and define your entities, the greenDAO website is a good place to go.
According to your description, you need an entity with an id property and a string property (affirmation).
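A minimal sketch of such a generator project might look like this (the entity and package names are my own invention; check the greenDAO docs for the exact API of your version):

import de.greenrobot.daogenerator.DaoGenerator;
import de.greenrobot.daogenerator.Entity;
import de.greenrobot.daogenerator.Schema;

public class AffirmationDaoGenerator {
    public static void main(String[] args) throws Exception {
        // Schema version 1, plus the package the generated classes go into.
        Schema schema = new Schema(1, "com.example.affirmations.dao");
        Entity affirmation = schema.addEntity("Affirmation");
        affirmation.addIdProperty();
        affirmation.addStringProperty("affirmation");
        // Writes the DAO classes into your Android project's source folder.
        new DaoGenerator().generateAll(schema, "../MyApp/src-gen");
    }
}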
In your Android project you then use the DevOpenHelper to get a session, and from the session you can get the DAO (Data Access Object) for your entity. The DAO includes the very basic query to load data by id (load()).
Please note that DevOpenHelper is only meant for the development process. For your final release you should extend OpenHelper and customize the actions to be taken on a DB schema update.
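Putting that together, a hedged sketch of the Android side (class and DB names follow the hypothetical generator sketch above):

// DevOpenHelper drops and recreates the schema on upgrade -- development only.
DaoMaster.DevOpenHelper helper =
        new DaoMaster.DevOpenHelper(context, "affirmations-db", null);
SQLiteDatabase db = helper.getWritableDatabase();
DaoSession session = new DaoMaster(db).newSession();
AffirmationDao dao = session.getAffirmationDao();
// load() is the basic query-by-id that every generated DAO provides.
Affirmation a = dao.load(someId);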
Here is some example code I have in my application.
DaoHelper.getInstance().getDaoSession().clear();
OperationDao dao = DaoHelper.getInstance().getDaoSession().getOperationDao();
String userId = "some id";
// (The more idiomatic form would be OperationDao.Properties.UserId.eq(userId).)
WhereCondition wc1 = new WhereCondition.PropertyCondition(OperationDao.Properties.UserId,
        " = " + userId);
WhereCondition wc2 = new WhereCondition.PropertyCondition(OperationDao.Properties.Priority,
        " > " + 4);
// Uncached is important if your data may have changed recently.
List<Operation> answer = dao.queryBuilder().where(wc1, wc2).listLazyUncached();
This is a decent tutorial for learning greenDAO. Make sure you follow the links to the further parts.
You can use:
daoMaster = new DaoMaster(db);
daoSession = daoMaster.newSession();
yourDao = daoSession.getYourDao();

Random random = new Random();
List<YourObject> objects = yourDao.loadAll();
YourObject yourObject = objects.get(random.nextInt(objects.size()));
I'm trying to better understand common strategies regarding results and errors in Celery.
I see that tasks have statuses/states and that Celery stores results if requested -- when would I use this data? Should error handling and data storage be contained within the task?
Here is a sample scenario, in case it helps better understand my objective:
I have a geocoding task that geocodes user addresses. Whether the task fails or succeeds, I'd like to update a field in the database letting the user know (error handling). On success, I'd like the geocoded data to be inserted into the database (data storage).
What approach should I take?
Let me preface this by saying that I'm still getting a feel for Celery myself. That being said, I have some general inclinations about how I'd go about tackling this, and since no one else has responded, I'll give it a shot.
Based on what you've written, a relatively simple (though I suspect non-optimized) solution is to follow the broad contours of the blog comment spam task example from the documentation.
app.models.py
class Address(models.Model):
    GEOCODE_STATUS_CHOICES = (
        ('pr', 'pre-check'),
        ('su', 'success'),
        ('fl', 'failed'),
    )
    address = models.TextField()
    ...
    geocode = models.TextField()
    geocode_status = models.CharField(max_length=2,
                                      choices=GEOCODE_STATUS_CHOICES,
                                      default='pr')

class AppUser(models.Model):
    name = models.CharField(max_length=100)
    ...
    address = models.ForeignKey(Address)
app.tasks.py
from celery import task
from app.models import Address, AppUser
from some_module import geocode_function  # assuming this returns a string

@task()
def get_geocode(appuser_pk):
    user = AppUser.objects.get(pk=appuser_pk)
    address = user.address
    try:
        result = geocode_function(address.address)
        address.geocode = result
        address.geocode_status = 'su'  # set address object as successful
        address.save()
        return address.geocode  # this is optional -- your task doesn't have to return anything
        # On the other hand, you could also choose to decouple the geocode
        # function from the database update for the object instance.
        # Also, if you're thinking about chaining tasks together, you might
        # think about whether it's advantageous to pass a parameter as an
        # input or partial input into the child task.
    except Exception as e:
        address.geocode_status = 'fl'  # address object fails
        address.save()
        # do_something_else()
        raise  # re-raise the error, in case you want to trigger retries, etc.
app.views.py
from app.tasks import *
from app.models import *
from django.shortcuts import get_object_or_404

def geocode_for_address(request, app_user_pk):
    app_user = get_object_or_404(AppUser, pk=app_user_pk)
    # ... etc., etc. -- somewhere in here, call your task with appropriate
    # args/kwargs, e.g. get_geocode.delay(app_user.pk)
I believe this meets the minimal requirements you've outlined above. I've intentionally left the view undeveloped since I don't have a sense of how exactly you want to trigger it. It sounds like you may also want some sort of user notification when their address can't be geocoded ("I'd like to update a field in the database letting the user know"). Without knowing more about the specifics of this requirement, it sounds like something that might be best accomplished in your HTML templates (if the instance's attribute value is X, display q in the template) or by using django signals (set up a signal for when user.address.geocode_status switches to failure -- say, by emailing the user to let them know, etc.).
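For instance, a rough sketch of the signals idea (the notification helper is hypothetical -- swap in email or template logic as needed):

from django.db.models.signals import post_save
from django.dispatch import receiver
from app.models import Address

@receiver(post_save, sender=Address)
def geocode_failure_handler(sender, instance, **kwargs):
    # Fires on every Address save; act only on failed geocodes.
    if instance.geocode_status == 'fl':
        notify_user_of_failure(instance)  # hypothetical helper, not shown here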
In the comments to the code above, I mentioned the possibility of decoupling and chaining the component parts of the get_geocode task. You could also think about decoupling the exception handling from the get_geocode task by writing a custom error-handler task and using the link_error parameter (for instance, add.apply_async((2, 2), link_error=error_handler.s()), where error_handler has been defined as a task in app.tasks.py). Also, whether you choose to handle errors via the main task (get_geocode) or via a linked error handler, I would think you'd want to get much more specific about how to handle different sorts of errors (e.g., handle connection errors differently from incorrectly formatted address data).
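A minimal sketch of such a linked error handler, adapted from the pattern in the Celery docs (exact result-backend behavior depends on your Celery version and configuration):

from celery import task
from celery.result import AsyncResult

@task()
def error_handler(uuid):
    # Celery calls this with the id of the failed task.
    result = AsyncResult(uuid)
    exc = result.get(propagate=False)
    print('Task {0} raised: {1!r}'.format(uuid, exc))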
I suspect there are better approaches, and I'm just beginning to understand how inventive you can get by chaining tasks, using groups and chords, etc. Hope this helps at least get you thinking about some of the possibilities. I'll leave it to others to recommend best practices.
In my project, there are additional (non-wicket) applications, which need to know the URL representation of some domain objects (e.g. in order to write a link like http://mydomain.com/user/someUserName/ into a notification email).
Now I'd like to create a Spring bean in my Wicket module that exposes the URLs I need without a running Wicket context, so that the other applications can depend on the Wicket module, e.g. offering a method public String getUrlForUser(User u) returning "/user/someUserName/".
I've been searching around the web and through the Wicket source for a complete workday now, and have not found a way to retrieve the URL for a given page class and PageParameters without a current RequestCycle.
Any ideas how I could achieve this? Actually, all the information I need is somehow stored by my WebApplication, in which I define mount points and page classes.
Update: Because the code below caused problems under certain circumstances (in our case, being executed subsequently by a Quartz scheduled job), I dived a bit deeper and finally found a more lightweight solution.
Pros:
No need to construct and run an instance of the WebApplication
No need to mock a ServletContext
Works completely independent of web application container
Contra (or not, depends on how you look at it):
Need to extract the actual mounting from your WebApplication class and encapsulate it in another class, which can then be used by standalone processes. You can no longer use WebApplication's convenient mountPage() method, but you can easily build your own convenience implementation (see the sketch after the code below); just have a look at the Wicket sources.
(Personally, I have never been happy with all the mount configuration making up 95% of my WebApplication class, so it felt good to finally extract it somewhere else.)
I cannot post the actual code, but having a look at this piece of code should give you an idea of how to mount your pages and how to get hold of the URL afterwards:
CompoundRequestMapper rm = new CompoundRequestMapper();

// mounting the pages
rm.add(new MountedMapper("mypage", MyPage.class));
// ... mount other pages ...

// create URL from page class and parameters
Class<? extends IRequestablePage> pageClass = MyPage.class;
PageParameters pp = new PageParameters();
pp.add("param1", "value1");
IRequestHandler handler = new BookmarkablePageRequestHandler(new PageProvider(pageClass, pp));
Url url = rm.mapHandler(handler);
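If you miss the convenience of mountPage(), a rough sketch of a wrapper (my own naming, not Wicket API) could look like this:

public class StandaloneMounts {
    private final CompoundRequestMapper rm = new CompoundRequestMapper();

    // Mirrors WebApplication.mountPage() on top of the standalone mapper.
    public void mountPage(String path, Class<? extends IRequestablePage> pageClass) {
        rm.add(new MountedMapper(path, pageClass));
    }

    // Builds the URL for a page class plus parameters, no RequestCycle needed.
    public Url urlFor(Class<? extends IRequestablePage> pageClass, PageParameters pp) {
        return rm.mapHandler(
                new BookmarkablePageRequestHandler(new PageProvider(pageClass, pp)));
    }
}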
Original solution below:
After deep-diving into the intestines of the Wicket sources, I was able to glue together this piece of code:
IRequestMapper rm = MyWebApplication.get().getRootRequestMapper();
IRequestHandler handler = new BookmarkablePageRequestHandler(new PageProvider(pageClass, parameters));
Url url = rm.mapHandler(handler);
It works without a current RequestCycle, but still needs to have MyWebApplication running.
However, from Wicket's internal test classes, I have put the following together to construct a dummy instance of MyWebApplication:
MyWebApplication dummy = new MyWebApplication();
dummy.setName("test-app");
dummy.setServletContext(new MockServletContext(dummy, ""));
ThreadContext.setApplication(dummy);
dummy.initApplication();
I am using EF4 Self-Tracking Entities (VS2010 Beta 2 CTP 2 plus the new T4 generator), but when I try to update entity information, it is not saved to the database as expected.
I set up two service calls: one is GetResource(int id), which returns a Resource object; the second is SaveResource(Resource res). Here is the code:
public Resource GetResource(int id)
{
    using (var dc = new MyEntities())
    {
        return dc.Resources.Where(d => d.ResourceId == id).SingleOrDefault();
    }
}

public void SaveResource(Resource res)
{
    using (var dc = new MyEntities())
    {
        dc.Resources.ApplyChanges(res);
        dc.SaveChanges();
        // Nothing is saved to the database.
    }
}

// Windows console client calls
var res = service.GetResource(1);
res.Description = "New Change"; // not updating...
service.SaveResource(res);
// does not change anything
It seems to me that ChangeTracker.State always shows as "Unchanged".
Is there anything wrong with this code?
This is probably a long shot... but:
I assume your service is actually in another tier? If you are testing in the same tier, you will have problems.
Self-Tracking Entities (STEs) don't record changes while they are attached to an ObjectContext; the idea is that an attached ObjectContext can record changes for them, so there is no point doing the same work twice.
STEs start tracking once they are deserialized on the client using WCF, i.e. once they are materialized in a tier without an ObjectContext.
If you look through the generated code you should be able to see how to turn tracking on manually too.
Hope this helps
Alex
You have to share the assembly containing the STEs between client and service -- that is the main point. Then, when adding the service reference, make sure that "Reuse types in referenced assemblies" is checked.
The reason for this is that STEs contain logic which cannot be transferred by "Add Service Reference", so you have to share these types to have the tracking logic on the client as well.
After reading the following tip from Daniel Simmons, the STEs started tracking. Here is the link to the full article: http://msdn.microsoft.com/en-us/magazine/ee335715.aspx
Make certain to reuse the Self-Tracking Entity template’s generated entity code on your client. If you use proxy code generated by Add Service Reference in Visual Studio or some other tool, things look right for the most part, but you will discover that the entities don’t actually keep track of their changes on the client.
So on the client, make sure you don't use Add Service Reference to get the proxy; instead, access the service through the following code.
var svc = new ChannelFactory<IMyService>("BasicHttpBinding_IMyService").CreateChannel();
var res = svc.GetResource(1);
If you are using STEs without WCF you may have to call StartTracking() manually.
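In that case, a sketch of what the manual call might look like on the client (StartTracking() is generated by the STE template; verify the exact name in your generated code):

var res = svc.GetResource(1);
res.StartTracking();            // begin recording changes on this entity
res.Description = "New Change"; // now flagged as Modified in the ChangeTracker
svc.SaveResource(res);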
I had the same exact problem and found the solution.
It appears that for the self-tracking entities to automatically start tracking, you need to reference your STE project before adding the service reference.
This way Visual Studio generates some .datasource files, which do the final trick.
I found the solution here:
http://blogs.u2u.be/diederik/post/2010/05/18/Self-Tracking-Entities-with-Validation-and-Tracking-State-Change-Notification.aspx
As for starting the tracking manually, it seems that you do not have these methods on the client side.
Hope it helps...