JKarhu
Cruncher
Joined: Apr 5, 2021
Post Count: 5
Re: Work unit availability

Ah, I would never have realised if you hadn't pointed out the G at the end of the string! Thank you very much!
[Apr 8, 2021 9:31:33 PM]
Mumak
Senior Cruncher
Joined: Dec 7, 2012
Post Count: 477
Re: Work unit availability

I'm trying to catch a task on this GPU (Intel DG1):
OpenCL: Intel GPU 0: Intel(R) Iris(R) Xe MAX Graphics (driver version 27.20.100.9168, device version OpenCL 3.0 NEO, 6456MB, 6456MB available, 1190 GFLOPS peak)


All settings seem to be OK and I have other machines that are getting WUs, but I haven't seen a single GPU task on the Xe yet.
Requesting new tasks for Intel GPU
Scheduler request completed: got 0 new tasks


So I'm starting to wonder whether there's perhaps some limit/filter on the scheduler side.

Here are some more details from the coproc_info:
<intel_gpu_opencl>
<name>Intel(R) Iris(R) Xe MAX Graphics</name>
<vendor>Intel(R) Corporation</vendor>
<vendor_id>32902</vendor_id>
<available>1</available>
<half_fp_config>63</half_fp_config>
<single_fp_config>63</single_fp_config>
<double_fp_config>0</double_fp_config>
<endian_little>1</endian_little>
<execution_capabilities>1</execution_capabilities>
<extensions>cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_intel_subgroups cl_intel_required_subgroup_size cl_intel_subgroups_short cl_khr_spir cl_intel_accelerator cl_intel_driver_diagnostics cl_khr_priority_hints cl_khr_throttle_hints cl_khr_create_command_queue cl_intel_subgroups_char cl_intel_subgroups_long cl_khr_il_program cl_intel_mem_force_host_memory cl_khr_subgroup_extended_types cl_khr_subgroup_non_uniform_vote cl_khr_subgroup_ballot cl_khr_subgroup_non_uniform_arithmetic cl_khr_subgroup_shuffle cl_khr_subgroup_shuffle_relative cl_khr_subgroup_clustered_reduce cl_intel_spirv_media_block_io cl_intel_spirv_subgroups cl_khr_spirv_no_integer_wrap_decoration cl_intel_unified_shared_memory_preview cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_intel_planar_yuv cl_intel_packed_yuv cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_image2d_from_buffer cl_khr_depth_images cl_khr_3d_image_writes cl_intel_media_block_io cl_khr_gl_sharing cl_khr_gl_depth_images cl_khr_gl_event cl_khr_gl_msaa_sharing cl_intel_dx9_media_sharing cl_khr_dx9_media_sharing cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_intel_d3d11_nv12_media_sharing cl_intel_unified_sharing cl_intel_subgroup_local_block_io cl_intel_simultaneous_sharing </extensions>
<global_mem_size>6769606656</global_mem_size>
<local_mem_size>65536</local_mem_size>
<max_clock_frequency>1550</max_clock_frequency>
<max_compute_units>96</max_compute_units>
<nv_compute_capability_major>0</nv_compute_capability_major>
<nv_compute_capability_minor>0</nv_compute_capability_minor>
<amd_simd_per_compute_unit>0</amd_simd_per_compute_unit>
<amd_simd_width>0</amd_simd_width>
<amd_simd_instruction_width>0</amd_simd_instruction_width>
<opencl_platform_version>OpenCL 3.0 </opencl_platform_version>
<opencl_device_version>OpenCL 3.0 NEO </opencl_device_version>
<opencl_driver_version>27.20.100.9168</opencl_driver_version>
<device_num>0</device_num>
<peak_flops>1190400000000.000000</peak_flops>
<opencl_available_ram>6769606656.000000</opencl_available_ram>
<opencl_device_index>0</opencl_device_index>
<warn_bad_cuda>0</warn_bad_cuda>
</intel_gpu_opencl>
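
(For reference: the "1190 GFLOPS peak" in the event log line above appears to follow directly from the detected device properties, presumably as

peak_flops = max_compute_units × max_clock_frequency × 8
           = 96 × 1550 MHz × 8 = 1190.4 GFLOPS

which matches the <peak_flops> value in the XML, so detection of the device itself looks fine.)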

[Apr 9, 2021 7:20:40 AM]
ShooterQ
Cruncher
Joined: Feb 28, 2021
Post Count: 2
Re: Work unit availability

As of right now, I also found it helpful to filter for "Valid", since my standard results completed in the same window of time have yet to be validated. With that filter, all of my OPNG results are on the first page. That may not apply to everyone, though.

Curiously, my MacBook Pro with no discrete graphics received more OPNG work than any of my other computers. I guess I should disable that option.
[Apr 9, 2021 7:59:37 AM]
Sphynxx
Cruncher
Joined: Nov 24, 2010
Post Count: 47
Re: Work unit availability

Is the 1,700 WUs / 200 batches / 30 minutes the limit of production, or will there be more as demand is determined? It's pretty apparent that none of us are getting as many WUs as we are willing and able to crunch. Sorry if this question was already asked.
[Apr 9, 2021 12:49:44 PM]
Former Member
Cruncher
Joined: May 22, 2018
Post Count: 0
Re: Work unit availability

The project is just starting out slowly in order to verify and test the supporting infrastructure and processes. Once it is determined that everything is working well, the project will scale up work unit distribution.
[Apr 9, 2021 12:57:23 PM]
nanoprobe
Master Cruncher
Classified
Joined: Aug 29, 2008
Post Count: 2998
Re: Work unit availability

I still have PVs from day 1. How could someone have that many or be that slow?
----------------------------------------
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.


[Apr 9, 2021 3:51:50 PM]
Ian-n-Steve C.
Senior Cruncher
United States
Joined: May 15, 2020
Post Count: 180
Re: Work unit availability

I still have PVs from day 1. How could someone have that many or be that slow?


Such is the life of BOINC cross-validation. Some people like to download tasks and then shut off their computer for a month. In those cases you just have to wait for the deadline to pass so the task gets sent to another host, hoping that the next host doesn't do the same thing. It's not uncommon for a WU to be replicated 4, 5, 6+ times before it finally lands on two hosts willing to actually process it.
----------------------------------------

EPYC 7V12 / [5] RTX A4000
EPYC 7B12 / [5] RTX 3080Ti + [2] RTX 2080Ti
EPYC 7B12 / [6] RTX 3070Ti + [2] RTX 3060
[2] EPYC 7642 / [2] RTX 2080Ti
[Apr 9, 2021 3:59:56 PM]
Eternum
Cruncher
Joined: Mar 31, 2020
Post Count: 1
Re: Work unit availability

I am observing strange behavior that (I think) relates to OPNG tasks across different computers.

I have several machines that are set to use the GPU and several where GPU usage is disabled (the GPU does not exist from BOINC's standpoint). On the PCs where GPU usage is disabled, I keep getting CPU tasks (OPN1, MIP1, etc.) on a regular basis, i.e. business as usual. However, the machines where GPU usage is enabled have not been getting any CPU tasks for days, and I see the following messages in the Event Log:
No tasks are available for OpenPandemics - COVID 19
No tasks are available for OpenPandemics - COVID-19 - GPU
No tasks are available for Microbiome Immunity Project

Apparently, these statements are not correct, since my other computers are getting OPN and MIP tasks without issues. I wonder if OPN GPU task availability somehow affects the ability to get CPU tasks on the machines in question? I really have no logical explanation for this strange phenomenon.

Can someone with inside knowledge of WCG functionality please shed some light on this? Thank you.
[Apr 9, 2021 5:40:22 PM]
goben_2003
Advanced Cruncher
Joined: Jun 16, 2006
Post Count: 145
Re: Work unit availability

They would still get that message once they have filled their queue of tasks, even if they are getting CPU tasks. If I understand your post correctly, though, the machines that have GPU usage enabled are not getting any CPU tasks at all. If that is the case:

FWIW, there have been a few times when I thought the scheduler request was not doing what it should. Each time, I enabled sched_op_debug to gain insight into the matter, and each time it pointed me towards some setting that was not set to what I thought it was.

One of the ways to enable sched_op_debug:
BOINC Manager -> Options -> Event Log options -> check "sched_op_debug"
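
If you prefer a config file, the same flag can presumably also be set via cc_config.xml in the BOINC data directory (a minimal sketch; re-read it with Options -> Read config files, or restart the client):

<cc_config>
    <log_flags>
        <!-- log details of each scheduler request and reply -->
        <sched_op_debug>1</sched_op_debug>
    </log_flags>
</cc_config>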
[Apr 9, 2021 5:56:43 PM]
nanoprobe
Master Cruncher
Classified
Joined: Aug 29, 2008
Post Count: 2998
Re: Work unit availability

I am observing strange behavior that (I think) relates to OPNG tasks across different computers.

I have several machines that are set to use the GPU and several where GPU usage is disabled (the GPU does not exist from BOINC's standpoint). On the PCs where GPU usage is disabled, I keep getting CPU tasks (OPN1, MIP1, etc.) on a regular basis, i.e. business as usual. However, the machines where GPU usage is enabled have not been getting any CPU tasks for days, and I see the following messages in the Event Log:
No tasks are available for OpenPandemics - COVID 19
No tasks are available for OpenPandemics - COVID-19 - GPU
No tasks are available for Microbiome Immunity Project

Apparently, these statements are not correct, since my other computers are getting OPN and MIP tasks without issues. I wonder if OPN GPU task availability somehow affects the ability to get CPU tasks on the machines in question? I really have no logical explanation for this strange phenomenon.

Can someone with inside knowledge of WCG functionality please shed some light on this? Thank you.

If you have machines set to receive GPU tasks only, and "Allow research to run on my CPU" is set to No in the profile for those machines, then you will not receive CPU tasks from any project on those machines.
----------------------------------------
In 1969 I took an oath to defend and protect the U S Constitution against all enemies, both foreign and Domestic. There was no expiration date.


[Apr 9, 2021 6:48:12 PM]