JCUDA -- undefined symbol: cuGLUnregisterBufferObject

bibrak@bibrak-laptop:/media/Academics/Academic/Research/HPC/CUDA/JCUDA$ java -cp .:$JCUDA_HOME/jcublas-0.2.3.jar:$JCUDA_HOME/jcuda-0.2.3.jar:$JCUDA_HOME/jcudpp-0.2.3.jar:$JCUDA_HOME/jcufft-0.2.3.jar JCudaDriverCubinSample
Error while loading native library with base name “JCudaDriver”
Operating system name: Linux
Architecture : i386
Architecture bit size: 32
Stack trace:
java.lang.UnsatisfiedLinkError: /media/Academics/Academic/Research/HPC/CUDA/JCUDA/JCuda-All-0.2.3-bin-linux-x86/libJCudaDriver-linux-x86.so: /media/Academics/Academic/Research/HPC/CUDA/JCUDA/JCuda-All-0.2.3-bin-linux-x86/libJCudaDriver-linux-x86.so: undefined symbol: cuGLUnregisterBufferObject
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1778)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1703)
at java.lang.Runtime.loadLibrary0(Runtime.java:823)
at java.lang.System.loadLibrary(System.java:1028)
at jcuda.LibUtils.loadLibrary(LibUtils.java:53)
at jcuda.driver.JCudaDriver.(JCudaDriver.java:82)
at JCudaDriverCubinSample.main(JCudaDriverCubinSample.java:36)

Any solution?

Hello,

I have to admit that I did not have the chance to test the driver API under Linux, because on Linux I can only do tests in the emulation mode. So it is possible that there is a problem with the precompiled library.

However, which version of CUDA are you currently using? According to the online documentation, the function “cuGLUnregisterBufferObject” is deprecated in version 3.0. Maybe there is a problem when trying to access this function with the current version of JCuda, which is targeted for CUDA 2.3.

I’ll try to investigate this error further and provide an updated version for CUDA 3.0 as soon as possible.

bye
Marco

I am using CUDA 2.3

Hm … It's a pity, but I can hardly test this: I have only one Linux installation, and it is on a non-CUDA PC.

Hopefully, someone who has a CUDA-capable Linux PC can share their experience with GL interoperability via the Driver API here…?

So I'll have to make some guesses: Could you compile and start the original GL-interoperability samples provided by NVIDIA (e.g. the SimpleGL sample)?

Possibly, you also have to add some GL libraries to the LD_LIBRARY_PATH so that they can be found at runtime? I can only guess this from the makefile:


# OpenGL is used or not (if it is used, then it is necessary to include GLEW)
ifeq ($(USEGLLIB),1)

	ifneq ($(DARWIN),)
		OPENGLLIB := -L/System/Library/Frameworks/OpenGL.framework/Libraries -lGL -lGLU $(GL_LIB_PATH)/libGLEW.a
	else
		OPENGLLIB := -lGL -lGLU -lX11 -lXi -lXmu

		ifeq "$(strip $(HP_64))" ""
			OPENGLLIB += -lGLEW -L/usr/X11R6/lib
		else
			OPENGLLIB += -lGLEW_x86_64 -L/usr/X11R6/lib64
		endif
	endif

	CUBIN_ARCH_FLAG := -m64
endif

ifeq ($(USEGLUT),1)
	ifneq ($(DARWIN),)
		OPENGLLIB += -framework GLUT
	else
		OPENGLLIB += -lglut
	endif
endif

ifeq ($(USEPARAMGL),1)
	PARAMGLLIB := -lparamgl$(LIBSUFFIX)
endif

ifeq ($(USERENDERCHECKGL),1)
	RENDERCHECKGLLIB := -lrendercheckgl$(LIBSUFFIX)
endif

According to the lines


OPENGLLIB := -lGL -lGLU -lX11 -lXi -lXmu
...
OPENGLLIB += -lGLEW -L/usr/X11R6/lib

it seems that it will also look for
libGL.so
libGLU.so
libX11.so
libXi.so
libXmu.so
libGLEW.so
and libraries in the /usr/X11R6/lib directory. I assume that this path (and all paths containing the lib*.so’s) will have to be added to the LD_LIBRARY_PATH.

Sorry that I can't give you more detailed information; I'm not a Linux expert, and have mainly installed it in order to be able to at least provide these pre-built binaries…

Hi,

Is there any other way to call the kernel function?
Is there any hello-world program?
Is there anything like the <<< … >>> calling syntax…

It would be good to know if the cuGLUnregisterBufferObject issue was resolved…?

The <<<…>>> calling convention is for the CUDA Runtime API. To execute your own kernels from JCuda, the CUDA Driver API has to be used. The parameters given inside the <<< brackets >>> are <<< Dg, Db, Ns, S >>>, where
Dg = Grid size
Db = Block size
Ns = bytes in shared memory per block
S = the associated stream

In most cases, only the first two are required, and these may be specified using the cuFuncSetBlockShape and cuLaunchGrid functions.
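
To illustrate, here is a stripped-down sketch of where the <<< Dg, Db >>> configuration ends up in the Driver API. This is not the complete sample: the .cubin file name, the kernel name and the sizes are only placeholders, and the kernel parameter setup is omitted.

import jcuda.driver.*;

public class DriverApiLaunchSketch
{
    public static void main(String args[])
    {
        // Initialize the Driver API and create a context on device 0
        JCudaDriver.cuInit(0);
        CUdevice device = new CUdevice();
        JCudaDriver.cuDeviceGet(device, 0);
        CUcontext context = new CUcontext();
        JCudaDriver.cuCtxCreate(context, 0, device);

        // Load the compiled kernel from a CUBIN file and obtain the function
        CUmodule module = new CUmodule();
        JCudaDriver.cuModuleLoad(module, "myKernel.cubin");
        CUfunction function = new CUfunction();
        JCudaDriver.cuModuleGetFunction(function, module, "myKernel");

        // ... set the kernel parameters here, using cuParamSetv / cuParamSeti
        // and cuParamSetSize ...

        // Db: the block size (threads per block)
        JCudaDriver.cuFuncSetBlockShape(function, 256, 1, 1);

        // Dg: the grid size (number of blocks); Ns and S keep their defaults
        JCudaDriver.cuLaunchGrid(function, 64, 1);
        JCudaDriver.cuCtxSynchronize();
    }
}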

The simplest application of the Driver API (the “Hello World” program) is the “JCudaDriverCubinSample.java” from http://jcuda.org/samples/samples.html

So that means we can only call our own kernels from the Driver API,
and we cannot call them until the cuGLUnregisterBufferObject issue is solved.

Even JCudaDriver.cuInit(0) doesn't work;
this important function gives the same error.
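
For reference, a minimal test with nothing but that call (the class name is chosen only for this test) is already enough to trigger the error:

import jcuda.driver.JCudaDriver;

public class CuInitTest
{
    public static void main(String args[])
    {
        // The UnsatisfiedLinkError above is thrown as soon as the
        // JCudaDriver class tries to load libJCudaDriver-linux-x86.so
        int result = JCudaDriver.cuInit(0);
        System.out.println("cuInit returned " + result);
    }
}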

Marco, have you tested it on your Linux? Maybe there is a problem at my end.

Under Linux I could only test the Runtime API. If even cuInit() produces an error, there might be a general problem with the driver library. Currently, I can hardly imagine a reason for that, because you said that the runtime library was working, and the runtime and driver libraries are basically compiled with the same makefile.

But I will have a closer look at that - unfortunately not earlier than next weekend, because currently I have no access to a Linux machine at all. (I'm currently working on updating JCuda for CUDA 3.0 support, and intended to build the Linux binaries next weekend, but first I'll try to solve this obviously very general issue.)

I have only managed to run JCudaRuntimeSample.java

and a little deviceQuery thing:

int temp[] = new int[1];
JCuda.cudaGetDevice(temp);
int device = temp[0];

System.out.println("The device number : " + device);

// Query and print the properties of the current device
cudaDeviceProp cp = new cudaDeviceProp();
JCuda.cudaGetDeviceProperties(cp, device);

System.out.println(cp.toFormattedString());

But how can I call my kernel if the Driver API is not working?

Can you please provide a program with Runtime API kernel calls,

for example using the cuFuncSetBlockShape and cuLaunchGrid functions?

thanks

It is not possible to execute your own kernels from .CU files using the Runtime API.

Maybe the problems with the Driver library are related to different Linux versions. The library is compiled on OpenSUSE 11.1. You might want to recompile the driver library for your system. A makefile is provided with the source code.

It would be helpful to know if anybody else had problems using the driver API from Linux. Although I have not received similar complaints yet, this certainly does not mean that it worked for everybody.

With slight changes in the makefile (the include paths):

JNI_INCLUDES = /usr/lib/jvm/java-6-sun-1.6.0.15/include/

INCLUDES += -I. -I./src -I$(CUDA_INSTALL_PATH)/include -I$(JNI_INCLUDES) -I$(JNI_INCLUDES)/linux $(CUDPP_INCLUDES)

I have compiled it and got the .so library, and placed it in its place,
but there is no improvement :frowning:

I’m slowly running out of ideas, but another try: You may try to ‘make’ it with
make USEDRVAPI=1
This flag should probably be enabled in the driver library makefile by default. I really have to take a closer look at this (i.e. Linux and makefiles) as soon as possible… :o

That works :slight_smile:

Thanks

Marco

Great! :slight_smile:

I will upload the adjusted makefile and library ASAP. (If you could send me the compiled driver library via mail, I could do this immediately; otherwise I'll do this next weekend, when I'm back at my Linux PC…)

I also prepared a release that supports CUDA 3.0, but I’ll do some more tests and possibly wait until NVIDIA publishes the NON-beta version of CUDA 3.0. Currently, CUDA 3.0 is not yet fully documented, and it seems that some features are about to be introduced but not yet available in the beta…