Re: YCCSA Grid setup

From: Jan Staunton <jps_at_cs.york.ac.uk>
Date: Mon, 28 Feb 2011 17:10:16 +0000
Message-Id: <DCC73D57-CE96-4F38-93E7-47FAE9C11CDD@cs.york.ac.uk>
Oh yeah, the White Rose Grid... :)

Looks like a ball ache compared to SGE though... And who uses GUIs these days? :)

On 28 Feb 2011, at 17:07, David R White wrote:

> 
> 
> On 28/02/11 17:03, Jan Staunton wrote:
>> Adding a few YCCSA people to the admin side doesn't seem problematic to me.
>> 
>> Also, projects and departments can be defined within SGE to divvy up resources according to whatever policy we implement.
>> 
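A minimal sketch of what that project setup might look like in SGE, purely for illustration (the project, userset and user names below are invented, and the exact fields vary a little between Grid Engine versions):

    # one project per group, each with a functional share
    qconf -ap yccsa              # opens an editor; set e.g. fshare 100
    qconf -ap cs                 # likewise for the CS side

    # put people in a userset and tie it to the project's acl field
    qconf -au jps yccsa_users    # add a user to (and create) the userset
    qconf -mp yccsa              # set acl to yccsa_users

    # turn on functional-share scheduling
    qconf -msconf                # set weight_tickets_functional to a non-zero value
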
>> I didn't realise that their machines ran diskless... that is quite hardcore.  Not so trivial then.
>> 
>> I was wondering, has there ever been a call to set up a uni-wide grid computing environment at all?  A good number of departments would benefit from a large cluster of machines.  Birmingham Uni have a similar facility that, at the very least, all researchers can access to perform large-scale computation.  Having access to such a resource really broadens horizons with respect to the scale of experimentation that can be done.
>> 
> 
> I think it's called the WRG!
> 
> David
> 
>> Cheers
>> 
>> Jan
>> 
>> On 28 Feb 2011, at 16:51, James Carter wrote:
>> 
>>> On 28/02/11 16:38, Jan Staunton wrote:
>>>> 
>>>> Looks as if YCCSA use SGE as well, meaning merging the two resources would be trivial.
>>> 
>>> They do, yes.
>>> 
>>>> They have 96 cores with low memory, and a 16-core machine with tonnes of memory.
>>> 
>>> 128 GB - it's a similar machine to our imola.
>>> 
>>> The main issue with their nodes is lack of memory: they are 4-core machines with 4 GB of RAM. This is compounded by the fact that the nodes run diskless, so about 1 GB of the RAM is used to store the OS. The theory is that, if/when they merge with our system, they would be fitted with discs to store the OS (our Linux) and for swap space.
>>> 
>>>> 
>>>> They have a bunch of strange restrictions on jobs, such as memory limits etc., so they are probably better versed in SGE config than we are :)
>>>> 
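Presumably those memory limits are enforced through SGE's h_vmem complex; a rough sketch of how that is typically done (the queue/host names and values here are guesses, not YCCSA's actual settings):

    # make h_vmem consumable so requests count against each node's total
    qconf -mc                                        # set h_vmem: consumable YES, default 1G

    # advertise roughly 3G usable on a 4-core / 4G diskless node
    qconf -mattr exechost complex_values h_vmem=3G node01

    # jobs then have to state their memory needs up front
    qsub -l h_vmem=750M myjob.sh
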
>>> 
>>> I expect the primary driver for this is the lack of memory on their nodes. From the technical side of things I can see how it's all going to work. What I don't know is how you're going to deal with the config and partitioning of resources when you have different groups of people using the cluster. At the moment we have a small group of grid administrators and there hasn't been any conflict. Would this work post-merger with some biology people added to the list of administrators, or do we need a different system?
>>> 
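For what it's worth, SGE can accommodate extra administrators and per-group partitioning directly; a sketch with made-up usernames, host-group and queue names:

    # give named YCCSA people admin rights
    qconf -am abc123             # full manager (can change any config)
    qconf -ao abc456             # operator (can manage jobs/queues, not config)

    # group their nodes and restrict a queue to their userset
    qconf -ahgrp @yccsa          # opens an editor; list their hosts under hostlist
    qconf -mattr queue user_lists yccsa_users yccsa.q
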
>>> --
>>> James Carter, Senior Experimental Officer
>>> 
>> 
>> 
> 
> -- 
> Dr David R. White
> Research Associate
> Dept. of Computer Science
> University of York,
> Deramore Lane, YO10 5GH.
> http://www.cs.york.ac.uk/~drw
Received on Mon 28 Feb 2011 - 17:10:21 GMT