World Community Grid Forums
Thread Status: Active | Total posts in this thread: 486
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
mecole, I respect your position on this, but I don't think WCG is going to rewrite the entire runtime badge system at this stage. Perhaps they would consider introducing this, or one of the alternatives that have been suggested, for GPUs only, but I don't see any sign of that. Arguing that the techs should do so will raise the same 'waste of tech time' argument again, not that I buy into that; project design is key. The same argument could be raised against a new badge, additional badges, site work, or even the introduction of new projects and GPU usage itself!
On actual task runtime (not badge credit), it's probably a good idea to have a runtime cap; the longer a GPU task runs, the greater the chance it will fail. This tends to affect older cards more than new ones, and the failure rate is significantly higher than for CPU tasks. Ideally the server would send an appropriate amount of work, sized card by card to keep the GPU busy, but the Beta results will allow the techs to refine the task structure, length and so on.

OpenCL is being used so that the app works on both NVidia and AMD/ATI cards; I don't think it will work on any of Intel's integrated GPUs. On NVidia cards OpenCL is generally less efficient than CUDA, but whether that means a reduction in GPU utilization or just less optimized code, I don't know. If it's the former, there would be more scope for running multiple tasks at once.

Having more RAM generally makes little difference for crunching; most GPU apps do not use all the GDDR, and on high-end cards it tends to be less than 50%. Using too much GDDR would increase the chance of failure; after all, most GPUs are also driving your display.
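If GPU utilization does turn out to be low, one way to run two tasks per card would be an app_config.xml in the project folder, on BOINC clients new enough to support that file. The sketch below is only illustrative: the app name is a placeholder, not the real HCC GPU app name, and it would need to match whatever the project actually calls its GPU app.

```
<app_config>
  <app>
    <name>hcc_gpu_placeholder</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

With gpu_usage set to 0.5, each task claims half a GPU, so the client schedules two at once per card; cpu_usage reserves a matching share of a CPU core to feed them.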
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
@skgiven
Speaking of GPU crunching, I know that you ran MilkyWay and GPUGrid, and I wonder if you were using ATI or Nvidia, and whether they were air-cooled or water-cooled? I haven't done MW for a while, but when I was, I was mostly running my GTX 285 and a little on a GTX 295. But the 295, being a dual-GPU card, just got WAY too hot, so I stopped using it. That is why I wonder if yours are air-cooled. I have since replaced the 285 with a GTX 560 Ti Classified 448 Core and I wonder if it should be OK on air? Also, I wonder if anyone is using the 560 over at MW and how it is performing with the DP?
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
"My suggestion would be to cap the max runtime per user at 24 hours per day and have absolutely nothing to do with performance."

Just for the record, I have some sympathy with your point of view, but would make the following points:
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
I think that's what he meant to say, but 1 card, 1 day max, as a machine can have 2, 3, 4, 6 or 8 GPUs, and even a mix of brands in them [a tweaker's heaven to get that to crunch on all cylinders].
Still, if the card is allowed to be subdivided into multiple logical processing units and run 2 or more tasks simultaneously, I'd have no trouble with clocking 2 times the runtime [is an HT-capable CPU not doing the same to acknowledged runtime?].

At the end of the day, having found out how I can tell my NVidia card to stay at maximum performance, giving a seriously blazing GUI experience right from the first nanosecond, it's the number of results returned we're all after. In the BOINC world only cobblestones count; WCG will surely do its thing to keep the runtime tally equitable, and maybe set new standards for how to reasonably compensate in inconvertibles. We'll work through whatever comes up to get an amenable solution. Silently, the techs will surely pick out the valid and reasonable points from the discussions and do the testing in the background in their labs. :D --//--
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Having read through this thread quite intently, without posting my own opinions, I'm reminded of games like Ogame and World of Warcraft, where people constantly discuss game balance among themselves and create month-long, heated discussions with over 1000 posts... when the third post was by an admin saying "we know it sucks, but we're not going to change it because of X".
After a few months, the thread dies down, and the game is as popular as ever. Personally, I'm in this to do good. I don't brag to anyone IRL about the size of my contribution, but the stats are a good way for me to see how I contribute compared with most other people, and it is fun to have them. If I wanted to skyrocket my runtime, I could without spending a cent. But because it doesn't benefit WCG, why would I? By all means, get involved in the reward system and have fun competing - just don't forget you joined to help humanity.
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
tomast, I stick to air cooling, but it's as much a location thing as optimizing the hardware. Air is cheaper and less complicated, but you do need to clean the GPU fairly regularly. I have used both NVidia and AMD/ATI at MW, but ATI's superior DP makes it king there. I have also used both for Folding, but the performance of GPUs there is poor compared to CPUs.
The GTX 560 Ti Classified 448 Core is an excellent card. Due to their limited DP, Fermis do not perform very well at MW, but it would still be a reasonable performer there. In SP (including here) it will perform much better. If it has a dual fan your 448 is probably going to run reasonably cool, but I always use fan-controlling software where possible, and I recommend everyone does this on high-end cards. For Windows you can use NVidia System Tools, MSI Afterburner, EVGA Precision...

PS. Each time you restart a Linux system you need to set the GPU to prefer maximum performance from the NVidia control panel, and I think Coolbits still doesn't work for Fermis.

[Edit 1 times, last edit by skgiven at Dec 11, 2011 10:10:39 PM]
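If you'd rather not click through the control panel after every reboot, a minimal sketch of doing the same thing from a startup script is below. It assumes the proprietary NVidia driver exposes the GPUPowerMizerMode attribute on your card and driver version, and it needs a running X session, so put it in your desktop session's autostart rather than in rc.local.

```
#!/bin/sh
# Sketch: ask the NVidia driver to prefer maximum performance on GPU 0.
# GPUPowerMizerMode=1 corresponds to "Prefer Maximum Performance" in nvidia-settings.
# Adjust [gpu:0] if you have more than one card; assumes X is already running.
nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
```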
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Thanks skgiven
I didn't realize about the Max Performance setting. I will make sure to set it at boot. Also, the strange thing is that Coolbits "1" does not seem to work for this card, but Coolbits "4" does work. At least I'll be able to pump up the fan speed as needed.

I was looking at the effect of setting Max Performance... First, my Kill-a-Watt went from 225 W up to 285 W. Second, the graphics clock went from 50 MHz up to 797 MHz!!! Thirdly, the temp went from 31C up to 49C. I wonder if it's gonna go much higher when it is actually crunching... (In Win 7, gaming at max settings, it never went past 60C.)

So anyway, apparently, at least in Linux, that Max Performance setting is very important so the card won't keep throttling up and down. Thanks again
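For anyone else wanting to try this, a minimal sketch of the relevant xorg.conf fragment is below; the Identifier is just a placeholder for whatever your existing Device section uses.

```
# Sketch: enable manual GPU fan control in /etc/X11/xorg.conf
# (Coolbits value "4" sets the manual-fan-control bit). Restart X after editing.
Section "Device"
    Identifier "Device0"
    Driver     "nvidia"
    Option     "Coolbits" "4"
EndSection
```

After X restarts, a command along the lines of `nvidia-settings -a "[gpu:0]/GPUFanControlState=1" -a "[fan:0]/GPUCurrentFanSpeed=70"` should let you pin the fan at 70%, though the writable fan-speed attribute name has changed across driver releases (GPUCurrentFanSpeed on older drivers, GPUTargetFanSpeed on newer ones), so check what your version accepts.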
knreed
Former World Community Grid Tech | Joined: Nov 8, 2004 | Post Count: 4504 | Status: Offline
First of all, thank you all for discussing the various trade-offs of the different approaches in this thread. There are flaws in all of our measurements (points, runtime, results), as they can all reflect different things rather than just relative contribution.
We are focused on making GPU work for our volunteers. This is priority number one. I had originally hoped to tackle other items on our to-do list with this work as well (such as making 1 WCG point = 1 BOINC credit, which is still left over from when we shut down UD). However, we are putting a stake in the ground to update the server code and shortly thereafter start beta testing GPU. As a result, instead of trying to solve everything, we are only doing the essentials so that we can get it running.

An example of this is that the new credit logic from BOINC appears to work quite well for consistently granting credit across all platforms (in alpha, the ATI and Nvidia GPU versions are claiming within 5% of Linux CPU, which is claiming within 5% of Windows), provided the flops estimates are consistent (if they aren't, then things behave poorly). There is a short 'ramp up' phase where baseline stats are collected and things are a little out of alignment, but that is short-lived. So we are simply going to jump into using the new credit logic. There is going to be a disconnect between credit before the server code update and credit after it. Our testing suggests it will be pretty close, but we will watch it after launch. If more credit is awarded, then we will probably leave it (this is a request from David Anderson, to be consistent with the overall BOINC community). If it is less, we will probably tweak it up some to be consistent with the old credit we awarded.

Once we have stabilized things after the server code update and we see how things are doing in beta testing, we will look at 'balance' issues. As an early preview of our thoughts: credit/points has always been the metric most designed to reflect the contributed computing power directly (and thus reflects whether you are contributing time from a PII or an i7), while runtime reflects the time that a workunit was allowed to run on a resource. Any policy changes that we make need to consider not only CPU vs GPU, but also the impact of contributions from Android on ARM and from computers whose CPU cores run at different speeds, both of which will be part of the environment over the next 18-24 months.

Additionally, given the explosion of cores on machines, we are starting to look at whether any of the apps we run, or will be running, can use more than one CPU core (note - a lot of the apps we have are not designed to be multi-threaded, so this can only be done if the app itself is already multi-threaded). This will become important to reduce the RAM being used on volunteers' machines with many concurrently running tasks. We expect to have a range of computers with a 100-1000 fold performance difference between them over the next 24 months. There are going to be a lot of challenges in determining what is 'fair' and yet motivational for all contributors over that period.

[Edit 1 times, last edit by knreed at Dec 12, 2011 4:21:44 PM]
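For a rough sense of why credit tracks computing power while runtime does not, consider the published cobblestone definition (a 1 GFLOPS machine crunching for a whole day earns about 200 credits); this is only a back-of-envelope illustration with made-up device numbers, not the actual averaging logic knreed describes. A CPU core sustaining roughly 1 GFLOPS for one hour would claim about 1 x (1/24) x 200 ≈ 8 credits, while a GPU sustaining roughly 100 GFLOPS of useful work over the same hour would claim about 100 x (1/24) x 200 ≈ 833 credits: identical runtime, roughly a hundredfold difference in credit.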
Former Member
Cruncher | Joined: May 22, 2018 | Post Count: 0 | Status: Offline
Awesome! Can't wait for the Betas
When/if they come out of Beta into full production, will I be able to opt in to HCC GPU and not HCC CPU? I wish I could run other projects on the CPU while HCC is grinding away on the GPU...

[Edit 1 times, last edit by Former Member at Dec 12, 2011 5:33:47 PM]
sk..
Master Cruncher | Joined: Mar 22, 2007 | Post Count: 2324 | Status: Offline
Thanks for the update. Good luck with the server and credit changes. It's good to know you are pondering the performance impact of such future devices, trying to accommodate a greater variety of hardware and systems with relatively limited memory.
tomast, I would expect you will be able to use the GPU for HCC and choose not to use the CPU for HCC; on BOINC projects that let you select whether or not to use the GPU, you can also choose not to use the CPU (MW, for example). Also, HCC might not even be available for CPUs when the GPU project starts (unless the same app can run on the CPU), and even if it is, you can already deselect HCC for the CPU. Cheers for the Coolbits 4 feedback ;)

[Edit 1 times, last edit by skgiven at Dec 13, 2011 12:17:33 AM]