After running the GPU app in both Windows and Linux, I have a couple of observations. I don't know if anything can be done about them, but they might be worth mentioning.

First, compared to most other GPU projects, GPU utilization is poor. Even trying to run two tasks concurrently does not seem to help a whole lot. There may be some performance being left on the table.

Second, even though the app can't saturate the GPU, it seems to need a great deal of other PC resources to run as well as it can. Typically, using more than half the CPU cores for any other purpose while crunching MLC will result in a drastic performance hit. Some other GPU apps have this tendency as well, but not to the degree seen here; usually leaving 2 physical cores open is enough to satisfy most GPU apps. It doesn't appear that cores are the actual problem, though — it may be that the MLC WUs are occupying a lot of L3 cache or something similar.

While the goal of having credits be equivalent/comparable between projects has long driven changes in BOINC, projects understandably resist any loss of their ability to set their own credit levels. It's a bit disappointing that to run MLC effectively, one must put the host PC in a condition of outputting far less work than with other combinations of projects, which tends to make it less attractive as a go-to everyday project. Just as one example, PrimeGrid incentivizes running 'unpopular' or project-preferred applications by granting a credit bonus for those applications.

GPU utilization is pretty low, but I have seen significant gains when running 2 GPU tasks per GPU. That being said, on a single task I was witnessing GPU utilization around 40-60%, so adding another task has the benefit of using the additional headroom (as far as the kernel sizes allow, of course). My bet is that it has to do with PyTorch more so than anything. Perhaps some optimization to the PyTorch configuration could help make use of the additional resource headroom, but it may just be that, since training is a purely iterative process, this is the best we'll get.

An option would be to do something similar to projects where a GPU utilization factor can be set in the user preferences, instead of having to make our own `app_info.xml` file. In this way you offload the problem of resource scheduling to the BOINC client, which is a job it's quite well designed for, and you get it for free. It may also be useful to think of having GPU WUs be batches of multiple HDF5 runs, such that a 'supervisory' app could spawn as many parallel pipelines as is optimally efficient for the given GPU, but that could be an increase in complexity for little return.
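For anyone wanting to try the two-tasks-per-GPU setup described above without building a full `app_info.xml`, BOINC's standard `app_config.xml` mechanism (a file dropped into the project's directory) can do it. This is a minimal sketch; the app name `mlds-gpu` is a placeholder — substitute the short app name that actually appears in your `client_state.xml` for the MLC GPU application.

```xml
<app_config>
  <app>
    <name>mlds-gpu</name>
    <gpu_versions>
      <!-- 0.5 GPUs per task tells the client it may run 2 tasks per GPU -->
      <gpu_usage>0.5</gpu_usage>
      <!-- reserve a full CPU core per task, matching the observation
           that the app is unusually CPU/cache hungry -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

After saving the file, use "Options → Read config files" in the BOINC Manager (or restart the client) for it to take effect.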
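The 'supervisory' app idea — one WU carrying a batch of HDF5 runs, fanned out into parallel pipelines — can be sketched in a few lines. This is purely illustrative and assumes nothing about MLC's actual code: `run_pipeline` is a stand-in for a real training run, and `pipelines_per_gpu` would be tuned against the observed 40-60% utilization headroom.

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(run_id):
    # Placeholder for one HDF5 training run; a real pipeline would
    # load the dataset for this run and train the network on the GPU.
    return run_id, "done"

def supervise(run_ids, pipelines_per_gpu=2):
    # Fan the batch out over a fixed number of parallel pipelines.
    # Threads suffice here because GPU-bound work releases the GIL;
    # the right pipeline count depends on the GPU's spare headroom.
    with ThreadPoolExecutor(max_workers=pipelines_per_gpu) as pool:
        return dict(pool.map(run_pipeline, run_ids))

results = supervise(["run-a", "run-b", "run-c", "run-d"])
```

As noted, the scheduling logic this adds is exactly what the BOINC client already does for free, which is why it may be complexity for little return.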