Posts: 183   Pages: 19
This topic has been viewed 19050 times and has 182 replies
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: What GPU's would you bring to the WCG table?

Hi, sorry for my bad English. I have an XFX ATI HD 5970; can I use it for any WCG project? Thanks in advance.
I'm currently crunching with my i7 920 @ 3.5 GHz :)


Not at the moment. They are working on a beta that will include GPU support on an opt-in basis, but nothing beyond that has been announced, and no official time frames yet either! We users were hoping it would arrive earlier this year, but WCG is not ready yet. It is only March; we have a long time of crunching ahead of us yet!

Thanks for your reply!! wink
[Mar 2, 2012 10:58:44 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Status: Offline
Re: What GPU's would you bring to the WCG table?

I think this is the "right" thread to discuss/present what BOINC sees a GTX 680 as... info snagged from the testers' mailing list:

23/03/2012 14:12:49 | | Starting BOINC client version 6.12.34 for windows_x86_64
23/03/2012 14:12:49 | | OS: Microsoft Windows 7: Ultimate x64 Edition, Service Pack 1, (06.01.7601.00)
23/03/2012 14:12:49 | | NVIDIA GPU 0: GeForce GTX 680 (driver version 30110, CUDA version 4020, compute capability 3.0, 2048MB, 361 GFLOPS peak)

23/03/2012 19:26:49 | | Starting BOINC client version 7.0.18 for windows_x86_64
23/03/2012 19:26:49 | | OS: Microsoft Windows 7: Ultimate x64 Edition, Service Pack 1, (06.01.7601.00)
23/03/2012 19:26:49 | | NVIDIA GPU 0: GeForce GTX 680 (driver version 301.10, CUDA version 4.20, compute capability 3.0, 2048MB, 1899MB available, 50 GFLOPS peak)
23/03/2012 19:26:49 | | OpenCL: NVIDIA GPU 0: GeForce GTX 680 (driver version 301.10, device version OpenCL 1.1 CUDA, 2048MB, 1899MB available)

His feeling is that 361 GFLOPS peak under BOINC v6.12.34 is an under-estimate, and 50 GFLOPS peak under BOINC v7 is certainly wrong.

We need a handler for CC 3.0, and some numbers to plug into it.

In code, he is getting

prop.major == 3
prop.minor == 0
prop.clockRate == 705500 CU_DEVICE_ATTRIBUTE_CLOCK_RATE
prop.multiProcessorCount == 8 CU_DEVICE_ATTRIBUTE_MULTIPROCESSOR_COUNT

From http://www.geforce.com/Active/en_US/en_US/pdf...-680-Whitepaper-FINAL.pdf, I'm reading that same multiprocessor count, and the suggestion that cores_per_proc should be 128 - though that doesn't square with the advertised total shader count of 1536.

There seems to be a problem with speed detection too - the detected 705.5 MHz is also below the advertised rate.

Again, I'm advised that *with effect from NV API 295*, the call to "NVAPI_GPU_GetPerfClocks doesn't return correct information anymore, you need to switch to the new NvAPI_GPU_GetAllClockFrequencies function". That suggests that, where available, the new call should be used - though the details of the call specification may still be restricted to NDA documentation - but the older code path needs to be retained for legacy compatibility on hosts with older driver versions.

Not that it is much relevant to the method of credit awarding at WCG... it's benchmarked against the CPU version of the same set of WUs as used for GPU testing... though there's still some work to do before the new system falls in line.

I was watching the bjbdbest 60 Minutes YouTube video of Derek Paravicini, and then yesterday read the comment "The top CPU's are not up to supporting the top GPU on this project at present" and thought [... very good at a very limited set of processes]

--//--

edit: Of course, the next client will understand the NVidia 3.0 drivers:
The 50 GFLOPS is the default when BOINC can't figure out anything else.
This was the case because it didn't have any code for compute capability 3.x.

I added the following to COPROC_NVIDIA::set_peak_flops():
    case 3:
    default:
        flops_per_clock = 2;
        cores_per_proc = 128;
I don't know if these are correct.
It's too bad that NVIDIA doesn't have an API for getting these.

Clock rate: the client uses a function called cuDeviceGetAttribute() to get it.
NvAPI_GPU_GetAllClockFrequencies returned zero hits on Google.

-- David

----------------------------------------
[Edit 2 times, last edit by Former Member at Mar 24, 2012 9:19:41 AM]
[Mar 24, 2012 8:39:59 AM]
oldDirty
Cruncher
Joined: Mar 10, 2009
Post Count: 21
Status: Offline
Re: What GPU's would you bring to the WCG table?

In April, maybe earlier, Nvidia's new GPU architecture, the successor to Fermi, arrives: Kepler. The upcoming GTX 680 board should be able to beat AMD's top GPU, the 7970. Wait and see. But if true, maybe I will buy one for WCG biggrin

OK, now we know much better. ^^
The handbrake on this card is so big it could reach the moon and the stars.
The 7970 is currently the hottest consumer card for GPGPU.
--
At the moment a GTX 460 is crunching, and a 9500 GT (32 shaders, DX10) can be added in 45 sec. ;)
----------------------------------------
[Edit 1 times, last edit by oldDirty at Mar 24, 2012 9:29:59 AM]
[Mar 24, 2012 9:26:50 AM]
sk..
Master Cruncher
Joined: Mar 22, 2007
Post Count: 2324
Status: Offline
Re: What GPU's would you bring to the WCG table?

!
----------------------------------------
[Edit 2 times, last edit by skgiven at Jul 18, 2012 9:27:04 PM]
[Mar 24, 2012 11:37:37 AM]
hendermd
Cruncher
United States
Joined: Apr 30, 2010
Post Count: 29
Status: Offline
Re: What GPU's would you bring to the WCG table?

NVidia have also halted PCIE3 driver support, so don't buy a new system to host a GTX680, there is little or no point.

Are you referencing the TechPowerUp post? If so, there is an update.

Update 3/23, 21:56
NVIDIA courteously responded to our article, with a statement. Here's the statement verbatim:

While X79/SNB-E is a native Gen2 platform, some motherboard manufacturers have enabled Gen3 speeds. With our GTX 680 launch drivers, we will only be supporting Gen2 speeds on X79/SNB-E while we work on validating X79/SNB-E at these faster speeds. Native Gen3 chipsets (like Ivy Bridge) will still run at full Gen3 speeds with our launch drivers.

GeForce GTX 680 supports PCI Express 3.0. It operates properly within the SIG PCI Express Specification and has been validated on multiple upcoming PCI Express 3.0 platforms. Some motherboard manufacturers have released updated SBIOS to enable the Intel X79/SNB-E PCI Express 2.0 platform to run at up to 8GT/s bus speeds. NVIDIA is currently working to validate X79/SNB-E with GTX 680 at these speeds with the goal of enabling 8GT/s via a future software update. Until this validation is complete, the GTX 680 will operate at PCIE 2.0 speeds on X79/SNB-E-based motherboards with the latest web drivers.

----------------------------------------

[Mar 24, 2012 6:26:16 PM]
Jim1348
Veteran Cruncher
USA
Joined: Jul 13, 2009
Post Count: 1066
Status: Offline
Re: What GPU's would you bring to the WCG table?

If you have idle GPUs, Folding@home is seeing some real results in the area of Alzheimer's Disease and reducing the side effects of advanced cancer treatments.
http://folding.typepad.com/

Folding works especially well on the Nvidia cards, since CUDA is somewhat more suitable than the OpenCL that AMD uses, but both are very useful. You can both Fold on the GPU and run BOINC/WCG on the CPU without problems. I do it on two PCs running Win7 64-bit, one a dedicated machine and the other for general desktop use. Folding on the display card sometimes causes screen lag, so I usually don't do it there, but it depends on the card and other factors. It actually works fine on my GT 430, but not well on my GTX 560s.

(I will be adding an AMD 7000 series card for WCG GPU projects when the beta phase is over with, and keep the Nvidia cards for Folding.)
----------------------------------------
[Edit 1 times, last edit by Jim1348 at Mar 24, 2012 9:19:45 PM]
[Mar 24, 2012 9:16:05 PM]
oldDirty
Cruncher
Joined: Mar 10, 2009
Post Count: 21
Status: Offline
Re: What GPU's would you bring to the WCG table?

The GTX680 beats the HD 7970 in a lot of games, basically because it was designed to do just that. So it's a good gaming card. It's not so special when it comes to OpenCL or FP64, where the HD7970 shines, but the GTX680 wasn't intended to be anything other than a gaming card. For high end crunching performance it looks like the GTX680 is limited to CUDA apps, and work is needed by most projects to get the most out of these cards. As the drivers are immature, gaming performance is likely to improve here and there, and some CUDA performance might improve in the future.

I don't think so.
It comes down to the driver only; nVidia doesn't want to unleash the OpenCL performance for the retail market. They want to sell their Quadro and Tesla cards. I can't think of any other reason.

At the moment a GTX 460 is crunching, and a 9500 GT (32 shaders, DX10) can be added in 45 sec. ;)

Seeing as there are no beta GPU tasks at present, your GTX 460 isn't crunching here! The 9500 GT does not have Compute Capability 1.2, so it wouldn't work anyway.

OK, now I don't understand what you mean; my GTX is not crunching here?
It's crunching for GPUGRID (long runs, about 12 h per WU), PrimeGrid and Einstein.
The 9500 GT can also run any project; it's just slow. OK, except Milkyway. ^^
----------------------------------------

[Mar 24, 2012 9:43:02 PM]
sk..
Master Cruncher
Joined: Mar 22, 2007
Post Count: 2324
Status: Offline
Re: What GPU's would you bring to the WCG table?

!
----------------------------------------
[Edit 1 times, last edit by skgiven at Jul 18, 2012 9:26:39 PM]
[Mar 25, 2012 1:19:24 PM]
Dataman
Ace Cruncher
Joined: Nov 16, 2004
Post Count: 4865
Status: Offline
Re: What GPU's would you bring to the WCG table?

During the beta and product launch, I can offer WCG:
8 EVGA GTX260's
4 EVGA GTX460's
2 EVGA GTX570's
1 EVGA GTX285
As I have long ago reached my target for HCC, after the launch I will probably go back to my normal GPU rotation between MW, GPUGrid and Einstein. If I find a binary pulsar, will they name it after me? Dataman's Star. laughing biggrin laughing

cowboy
----------------------------------------
[Edit 1 times, last edit by Dataman at Mar 25, 2012 3:52:25 PM]
[Mar 25, 2012 2:45:12 PM]
Jim1348
Veteran Cruncher
USA
Joined: Jul 13, 2009
Post Count: 1066
Status: Offline
Re: What GPU's would you bring to the WCG table?

In the future the app might change, recommendations might be altered, drivers might improve some GPU's over others, new motherboard architectures might bring improvements, and there will be more GPU's turning up. In a few months when the project goes live (for GPU tasks), cards will be less expensive, and you will know exactly what performance/input they will bring. So invest in the right architecture at the right time to optimize your input, and consider the other WCG projects too; 22nm CPU's are due out soon. These will probably do more work, and cost less to run.

All excellent points, and why I am starting out on a lowly GT 430 until I can at least get the software to run. It has yielded a couple of fixes for problems thus far, even before I have gotten any betas. But as a matter of idle curiosity, I am wondering whether ANY of the results have been "correct" thus far. That is, do they match what the CPU version gives? Just because your card doesn't crash or give obvious errors does not mean that the result is scientifically valid. I am not expecting you, or anyone else, to provide the answer now, but as we go along it is worth considering.
[Mar 25, 2012 4:10:11 PM]