Re: YCCSA Grid setup

From: James Carter <james_at_cs.york.ac.uk>
Date: Mon, 28 Feb 2011 16:51:06 +0000
Message-ID: <4D6BD27A.1010302@cs.york.ac.uk>
On 28/02/11 16:38, Jan Staunton wrote:
>
> Looks as if YCCSA use SGE as well, meaning a merging of the two resources would be trivial.

They do, yes.

> They have 96 cores with low memory, and a 16-core machine with tonnes of memory.

128 GB - it's a similar machine to our imola.

The main issue with their nodes is lack of memory: they are 4-core 
machines with 4 GB of memory. This is compounded by the fact that the 
nodes run disc-less, so about 1 GB of the RAM is used to store the OS. 
The theory is that, if/when they merge with our system, they would be 
fitted with discs to store the OS (our Linux) and for swap space.

>
> They have a bunch of strange restrictions on jobs, such as memory limits etc so they are prolly better versed in SGE config than we are :)
>

I expect the primary driver for this is the lack of memory on their 
nodes. From the technical side of things I can see how it's all going to 
work. What I don't know is how you're going to deal with the config and 
partitioning of resources when you have different groups of people using 
the cluster. At the moment we have a small group of grid administrators 
and there hasn't been any conflict. Would this work post-merger with 
some biology people added to the list of administrators, or do we need a 
different system?
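For what it's worth, the sort of per-job memory caps and group 
partitioning being discussed can be expressed in SGE along these lines. 
This is a rough sketch, not YCCSA's actual config - the queue name 
(yccsa.q), access-list name (yccsa_users), usernames, and the 3G limit 
are all made-up examples:

```shell
# Illustrative SGE config sketch; yccsa.q, yccsa_users, and the 3G
# figure are hypothetical, not the real cluster settings.

# Cap per-slot virtual memory on the low-memory nodes' queue, so a job
# can't exhaust the ~3 GB of RAM left on a disc-less 4 GB node:
qconf -mattr queue h_vmem 3G yccsa.q

# Partition access by group: add users to an access list, then attach
# that list to the queue so only its members can run jobs there:
qconf -au jan,james yccsa_users
qconf -mattr queue user_lists yccsa_users yccsa.q
```

Actually enforcing h_vmem as a limit also means marking it consumable in 
the complex definition (edited interactively with `qconf -mc`). On the 
admin side, SGE distinguishes cluster-wide managers/operators (`qconf 
-am` / `qconf -ao`) from per-queue owner lists, which might be one way 
to give the biology people some control without full admin rights.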

-- 
James Carter, Senior Experimental Officer
Received on Mon 28 Feb 2011 - 16:51:07 GMT