bibrak@bibrak-laptop:/media/Academics/Academic/Research/HPC/CUDA/JCUDA$ java -cp .:$JCUDA_HOME/jcublas-0.2.3.jar:$JCUDA_HOME/jcuda-0.2.3.jar:$JCUDA_HOME/jcudpp-0.2.3.jar:$JCUDA_HOME/jcufft-0.2.3.jar JCudaDriverCubinSample
Error while loading native library with base name “JCudaDriver”
Operating system name: Linux
Architecture : i386
Architecture bit size: 32
Stack trace:
java.lang.UnsatisfiedLinkError: /media/Academics/Academic/Research/HPC/CUDA/JCUDA/JCuda-All-0.2.3-bin-linux-x86/libJCudaDriver-linux-x86.so: /media/Academics/Academic/Research/HPC/CUDA/JCUDA/JCuda-All-0.2.3-bin-linux-x86/libJCudaDriver-linux-x86.so: undefined symbol: cuGLUnregisterBufferObject
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1778)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1703)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1028)
at jcuda.LibUtils.loadLibrary(LibUtils.java:53)
at jcuda.driver.JCudaDriver.<clinit>(JCudaDriver.java:82)
at JCudaDriverCubinSample.main(JCudaDriverCubinSample.java:36)
I have to admit that I did not have the chance to test the Driver API under Linux, because on Linux I can only run tests in emulation mode. So it is possible that there is a problem with the precompiled library.
However, which version of CUDA are you currently using? According to the online documentation, the function “cuGLUnregisterBufferObject” is deprecated in version 3.0. Maybe there is a problem when trying to access this function with the current version of JCuda, which is targeted for CUDA 2.3.
I’ll try to investigate this error further and provide an updated version for CUDA 3.0 as soon as possible.
Hm … It’s a pity, but I can hardly test this: I have only one Linux installation, and it is on a non-CUDA PC.
Hopefully, someone who has a CUDA-capable Linux PC can share their experience with GL interoperability via the Driver API here…?
So I’ll have to make some guesses: could you compile and run the original GL-interoperability samples provided by NVIDIA (e.g. the SimpleGL sample)?
Possibly, you also have to add some GL libraries to the LD_LIBRARY_PATH so that they can be found at runtime? I can only guess this from the makefile:
# OpenGL is used or not (if it is used, then it is necessary to include GLEW)
ifeq ($(USEGLLIB),1)
    ifneq ($(DARWIN),)
        OPENGLLIB := -L/System/Library/Frameworks/OpenGL.framework/Libraries -lGL -lGLU $(GL_LIB_PATH)/libGLEW.a
    else
        OPENGLLIB := -lGL -lGLU -lX11 -lXi -lXmu
        ifeq "$(strip $(HP_64))" ""
            OPENGLLIB += -lGLEW -L/usr/X11R6/lib
        else
            OPENGLLIB += -lGLEW_x86_64 -L/usr/X11R6/lib64
        endif
    endif
    CUBIN_ARCH_FLAG := -m64
endif

ifeq ($(USEGLUT),1)
    ifneq ($(DARWIN),)
        OPENGLLIB += -framework GLUT
    else
        OPENGLLIB += -lglut
    endif
endif

ifeq ($(USEPARAMGL),1)
    PARAMGLLIB := -lparamgl$(LIBSUFFIX)
endif

ifeq ($(USERENDERCHECKGL),1)
    RENDERCHECKGLLIB := -lrendercheckgl$(LIBSUFFIX)
endif
It seems that it will also look for
libGL.so
libGLU.so
libX11.so
libXi.so
libXmu.so
libGLEW.so
and for libraries in the /usr/X11R6/lib directory. I assume that this path (and all paths containing these lib*.so files) will have to be added to the LD_LIBRARY_PATH.
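As a quick sanity check (just an idea for narrowing this down, not part of any sample), you could print the paths that the JVM actually uses when it searches for native libraries, and see whether the directories containing these lib*.so files show up there:

public class LibraryPathCheck
{
    public static void main(String args[])
    {
        // The JVM resolves System.loadLibrary calls against java.library.path;
        // on Linux, the entries of LD_LIBRARY_PATH are normally part of it
        System.out.println("java.library.path = "
            + System.getProperty("java.library.path"));
        System.out.println("LD_LIBRARY_PATH   = "
            + System.getenv("LD_LIBRARY_PATH"));
    }
}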
Sorry that I can’t give you more detailed information; I’m not a Linux expert, and have mainly installed it in order to be able to at least provide these pre-built binaries…
It would be good to know if the cuGLUnregisterBufferObject issue was resolved…?
The <<<…>>> calling convention is for the CUDA Runtime API. To execute your own kernels from JCuda, the CUDA Driver API has to be used. The parameters that are given in the <<< brackets >>> are <<< Dg, Db, Ns, S >>>, where
Dg = Grid size
Db = Block size
Ns = bytes in shared memory per block
S = the associated stream
In most cases, only the first two are required, and these may be specified using the cuFuncSetBlockShape and cuLaunchGrid functions.
The simplest application of the Driver API (the “Hello World” program) is the “JCudaDriverCubinSample.java” from http://jcuda.org/samples/samples.html; a reduced sketch of its basic structure is shown below.
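Roughly, a launch with the Driver API looks like this. This is only a minimal sketch, not the complete sample: the file name “kernel.cubin” and the function name “sampleKernel” are placeholders for your own compiled kernel, error checks are omitted, and the kernel is assumed to take a single int pointer as its only argument.

import static jcuda.driver.JCudaDriver.*;
import jcuda.*;
import jcuda.driver.*;

public class DriverApiSketch
{
    public static void main(String args[])
    {
        // Initialize the Driver API and create a context on device 0
        cuInit(0);
        CUdevice device = new CUdevice();
        cuDeviceGet(device, 0);
        CUcontext context = new CUcontext();
        cuCtxCreate(context, 0, device);

        // Load the compiled CUBIN and obtain the kernel function
        CUmodule module = new CUmodule();
        cuModuleLoad(module, "kernel.cubin");
        CUfunction function = new CUfunction();
        cuModuleGetFunction(function, module, "sampleKernel");

        // Allocate device memory for a single int and pass its
        // pointer as the only kernel parameter
        CUdeviceptr deviceOutput = new CUdeviceptr();
        cuMemAlloc(deviceOutput, Sizeof.INT);
        int offset = 0;
        cuParamSetv(function, offset, Pointer.to(deviceOutput), Sizeof.POINTER);
        offset += Sizeof.POINTER;
        cuParamSetSize(function, offset);

        // Db: the block size, set with cuFuncSetBlockShape
        cuFuncSetBlockShape(function, 256, 1, 1);

        // Dg: the grid size, given to cuLaunchGrid
        cuLaunchGrid(function, 8, 1);
        cuCtxSynchronize();

        // Copy the result back to the host and clean up
        int hostOutput[] = new int[1];
        cuMemcpyDtoH(Pointer.to(hostOutput), deviceOutput, Sizeof.INT);
        cuMemFree(deviceOutput);
        System.out.println("Result: " + hostOutput[0]);
    }
}

Ns and S keep their defaults here: no shared memory size is set (that would be done with cuFuncSetSharedSize), and the launch uses the default stream.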
Under Linux I could only test the Runtime API. If even cuInit() produces an error, there might be a general problem with the driver library. Currently, I can hardly imagine a reason for that, because you said that the runtime library was working, and the runtime and driver libraries are basically compiled with the same makefile.
But I will have a closer look at that - unfortunately not before next weekend, because currently I have no access to a Linux machine at all. (I’m currently working to update JCuda for CUDA 3.0 support, and intended to build the Linux binaries next weekend, but first I’ll try to solve this obviously very general issue.)
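To narrow this down, a minimal test that does nothing but load the driver library and call cuInit() might already help (just a sketch, the class name is arbitrary):

import static jcuda.driver.JCudaDriver.cuInit;
import jcuda.driver.CUresult;

public class CuInitCheck
{
    public static void main(String args[])
    {
        // Referencing JCudaDriver triggers the loading of the native library.
        // An UnsatisfiedLinkError here points to a library loading problem,
        // while a non-zero result code points to a problem with the CUDA driver.
        int result = cuInit(0);
        System.out.println("cuInit: " + CUresult.stringFor(result));
    }
}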
It is not possible to execute your own kernels from .cu files using the Runtime API.
Maybe the problems with the Driver library are related to different Linux versions. The library is compiled on OpenSUSE 11.1. You might want to recompile the driver library for your system. A makefile is provided with the source code.
It would be helpful to know if anybody else had problems using the driver API from Linux. Although I have not received similar complaints yet, this certainly does not mean that it worked for everybody.
I’m slowly running out of ideas, but here is another try: you could ‘make’ it with
make USEDRVAPI=1
This flag should probably be enabled in the driver library makefile by default. I really have to take a closer look at this (i.e. Linux and makefiles) as soon as possible… :o
I will upload the adjusted makefile and library ASAP. (If you could send me the compiled driver library via mail, I could do this immediately; otherwise I’ll do this next weekend, when I’m back at my Linux PC…)
I have also prepared a release that supports CUDA 3.0, but I’ll do some more tests and possibly wait until NVIDIA publishes the NON-beta version of CUDA 3.0. Currently, CUDA 3.0 is not yet fully documented, and it seems that some features are about to be introduced but are not yet available in the beta…