Is there any way to monitor GPU memory usage while running kernels?
For example, constant memory, local memory, or whatever else there is?
I’m asking because when we run a set of kernels on older cards, they crash a lot with CL_OUT_OF_RESOURCES. I know the usual reaction to this error is to check for out-of-bounds reads or writes into memory, but that can hardly be the case here.
The kernels never crash on the CPU, and they run perfectly on newer GPUs like the GTX 660 Ti, GT 640, and an AMD Radeon HD 7770. At least they haven’t crashed there yet; I’m planning a full stress test to verify that the newer GPUs really don’t crash.
Is it possible to get overflows by using local, private, constant, or any other memory available on the GPU? Could that be why the older cards crash so often, since they have less of that memory?
I would like to know if there is a way to track the usage of these specific memory types, or if there are guidelines for using them, so I can experiment and rule out possible sources of the crash.
Thanks in advance!