World Community Grid Forums
Thread Status: Active | Total posts in this thread: 5
Shinobi Gaiden
Advanced Cruncher | Joined: Sep 27, 2005 | Post Count: 92 | Status: Offline
If there are over 200 more batches to study and we are still folding proteins (and we assume we will "run them all"), is there any specific order in which we will deal with them? (Did we stick with the first 80 because they seemed more inter-related?)

Or is it just a matter of updating the systems with new info? Is there any clear number other than 200+ (the max we know of) that we could be counting down from? ;) And how is work coming along on the data we already ran? (Oversimplify, please: "we're on task" vs. "things have been slow due to the holidays".)

Happy new year!
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Hello Shinobi Gaiden,
Work units are organized into batches on the backend. Batches are sent to WCG from the project institute, and results are organized into matching batches and sent back. It is a simple way of keeping track of huge amounts of data. Which project are you asking about?

mycrofth
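For anyone curious what that bookkeeping might look like, here is a minimal Python sketch of the idea. The function and field names (`make_batches`, `batch_id`, and so on) are made up purely for illustration; this is not WCG's actual backend code, just the general pattern of splitting work units into numbered batches and grouping returned results back under the batch they came from.

```python
# Illustrative sketch only -- not WCG's real backend.
# Work units are grouped into numbered batches before being sent out,
# and returned results are filed back under the batch they belong to.

from collections import defaultdict

def make_batches(work_units, batch_size=1000):
    """Split a list of work units into numbered batches."""
    return {
        batch_id: work_units[i:i + batch_size]
        for batch_id, i in enumerate(range(0, len(work_units), batch_size))
    }

def collect_results(results):
    """Group returned results by the batch they came from."""
    by_batch = defaultdict(list)
    for result in results:
        by_batch[result["batch_id"]].append(result)
    return by_batch

# Example: 2,500 hypothetical work units -> 3 batches (1000, 1000, 500).
units = [{"unit_id": n} for n in range(2500)]
batches = make_batches(units)
print({b: len(u) for b, u in batches.items()})  # {0: 1000, 1: 1000, 2: 500}
```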
Shinobi Gaiden
Advanced Cruncher | Joined: Sep 27, 2005 | Post Count: 92 | Status: Offline
I guess I misunderstood. I thought all the HPF work done so far was batch 1, but in that post I forgot the word used to describe them (we did 80 of 200+ ____ ), and that batch 2 would be the upcoming data. (Which is why I was asking: why don't we just run all 200+ _______ ?)

Organization is probably the answer to my question.
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
HPF has been running batches of about a thousand proteins per batch. We have several more batches of proteins waiting at the ISB to run, and we have processed more than a hundred thousand proteins (counting those processed by grid.org).
We intend to run a different project, currently referred to as HPF2, which will use the slower, more computationally intensive, high-resolution folding method of Rosetta on some carefully selected proteins that we have already folded in HPF using the fast low-resolution method. Right now, Rosetta@home is working to improve the algorithm that will be used in HPF2.

The greater accuracy of high-resolution folding will let us use the results in ways "fundamentally different" from what we can do with the less accurate results of the low-resolution method used in HPF. (That phrase is a quote; no, I don't know what the fundamentally different ways are. I hope that will be explained when the HPF2 project starts, whatever it ends up being named.)

mycrofth
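To make the coarse-to-fine idea concrete, here is a small hypothetical Python sketch. The field names (`protein_id`, `score`) and the score-based selection rule are my own assumptions for illustration, not the ISB's actual criteria; the source only says the proteins for HPF2 will be "carefully selected" from those already folded at low resolution.

```python
# Illustrative sketch only, with made-up field names -- not the ISB's
# actual selection code. The idea: every protein gets a fast low-resolution
# fold first (HPF), then a limited subset is queued for the slower,
# high-resolution Rosetta refinement (HPF2).

def select_for_high_res(low_res_results, max_proteins=1000):
    """Pick the most promising low-resolution folds for refinement.

    'score' is a hypothetical quality measure from the low-res run;
    lower is treated as better, mirroring Rosetta-style energy scores.
    """
    ranked = sorted(low_res_results, key=lambda r: r["score"])
    return [r["protein_id"] for r in ranked[:max_proteins]]

# Example with fake data: three low-res results, keep the best two.
low_res = [
    {"protein_id": "P1", "score": -120.5},
    {"protein_id": "P2", "score": -80.2},
    {"protein_id": "P3", "score": -150.9},
]
print(select_for_high_res(low_res, max_proteins=2))  # ['P3', 'P1']
```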
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
He was probably thinking of 'genomes': we set out to do 80, there are 200+ genomes in total, and we have continued past that original 80.