Sure, the complexity of C++ with its compatibility issues between libraries (even libraries built on the same platform, with the same compiler, but with different linker settings) is something that baffled me recently (I (re)started C++ coding for the job, after 15 years of Java). All I can say is that all this is (at least for me) a sign that somewhere, something went horribly wrong. I'm not sure who to blame (the standards committees? The compiler people? "The community"? Heck, I don't care - I just want to build a library and use it somewhere else...). But I can say for sure that this will not be "fixed" soon. Efforts like LLVM (and the dozens of other IRs) at least show that even the C++ people are beginning to recognize that VMs and IRs have their advantages - just like the Java people did 25 years ago.
So when I said that this should be tackled on the VM level, I meant really long term. There already are approaches that allow accessing platform-specific libraries (e.g. DLLs) using some magic method invocation tricks, without compiling native code at all, but I’m not sure how well they work in the corner cases…
I wouldn't consider the use of annotations and your presets as "ad hoc". The most obvious, straightforward approach would be to parse the headers and dump out the JNI method implementations. But the devil is in the "details" (which in fact aren't details at all) - namely, the type conversions. This is nicely covered with the @Cast annotation in JavaCPP, which solves quite a bunch of issues. The problem can be stated a bit more generally (or maybe just a bit more fuzzily): one has to know the type mapping. In fact, that's one of the main things that the code generator I use internally is all about: the type mapping is configured with some rules. At the moment, this has to be done manually, and separately for the C and Java parts, which is a hassle and error-prone. E.g. when a native method receives a cudaMemcpyKind, then I still have to manually add the mapping (pseudocode):
javaWriter.putMapping("cudaMemcpyKind", "int");
jniWriter.putMapping("cudaMemcpyKind", "jint");
:sick: Most of this information could be derived from the header files and some basic rules: when something is typedef'ed as an int, then in the Java world it's usually an int/jint. There are some details that cannot be derived (e.g. how to handle the case where an object is passed by reference), but I think some generalizations can apply here as well. The fact that for JavaCPP all this information is basically summarized in one place, in a machine-processable form in the presets (and the @Cast annotations), simplifies things a lot.
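A minimal sketch of how such a derivation could look, assuming a hand-written rule table. All names and rules here (including modeling cudaMemcpyKind as a plain int typedef) are purely illustrative - this is not JavaCPP's or my generator's actual code:

```cpp
#include <map>
#include <string>

// The derived Java-side and JNI-side type for a given C type.
struct TypeMapping {
    std::string javaType;
    std::string jniType;
};

// Base rules: how primitive C types map to Java/JNI types.
static const std::map<std::string, TypeMapping> baseRules = {
    { "int",    { "int",    "jint"    } },
    { "long",   { "long",   "jlong"   } },
    { "float",  { "float",  "jfloat"  } },
    { "double", { "double", "jdouble" } },
};

// Typedefs as they could be extracted from the headers by a parser:
// the typedef'ed name on the left, the underlying type on the right.
static const std::map<std::string, std::string> typedefs = {
    { "cudaMemcpyKind", "int" }, // actually an enum, modeled as int here
    { "cl_uint",        "int" },
};

// Resolve a type name through the typedef chain, then apply a base rule.
TypeMapping deriveMapping(std::string name) {
    auto t = typedefs.find(name);
    while (t != typedefs.end()) {
        name = t->second;
        t = typedefs.find(name);
    }
    auto rule = baseRules.find(name);
    if (rule != baseRules.end()) {
        return rule->second;
    }
    return { "Object", "jobject" }; // fallback for unknown types
}
```

With something like this, the two putMapping calls from above would collapse into a single lookup that is driven entirely by information already present in the headers.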
Concerning the parser (also in conjunction with LWJGL and C++ support in general) and the generator: I had a short look at the generator code of JavaCPP, but in fact, the Parser is far more interesting (and I hadn't looked at this yet - I just quickly scrolled over it now). As already mentioned, C++ is a beast - and so is the parser.
When I started "generalizing" my code generation approaches, I did not even dare to tackle this manually. I considered using something like ANTLR and feeding it the C++ grammar, but back then, this was not trivial either. (Today, with ANTLR4, it might actually be a feasible approach.) But I thought: hey, people have already done that (and these guys definitely know their stuff, and the pitfalls of C++, better than I do). So I just ripped the relevant libraries out of https://eclipse.org/cdt/ . It works pretty well, but is still inconvenient to work with (because of the plain complexity of C++). So I'm currently just using these libraries for parsing an AST out of the C/C++ header files, and translating this AST into a very simple code model. It's just powerful enough to handle C headers, and does not really support any C++ features, but at least I don't have to do any manual parsing and still have the complete C++ AST available - so I could extract more information from the AST if necessary. "Most" libraries that one might want to call from Java (at least, the ones I have been working with) offer a C interface anyway - you can't even pass a std::string from one DLL to another (which is... odd, to say the least).
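To illustrate why libraries fall back to a C interface: a std::string crossing the DLL boundary ties both sides to the same STL implementation and allocator, so the usual workaround is to copy the characters into a caller-provided buffer. A hedged sketch (libGetVersion and the version string are made up for this example):

```cpp
#include <cstring>
#include <string>

// Internal C++ implementation, private to the library -
// std::string is fine here, it never leaves the DLL.
static std::string internalGetVersion() { return "1.2.3"; }

// Exported C interface: no C++ types cross the boundary.
// Returns 0 on success, or the required buffer size (including the
// terminating '\0') when the given buffer is too small or null.
extern "C" int libGetVersion(char *buffer, int bufferSize) {
    std::string s = internalGetVersion();
    if (buffer == nullptr || bufferSize <= static_cast<int>(s.size())) {
        return static_cast<int>(s.size()) + 1;
    }
    std::memcpy(buffer, s.c_str(), s.size() + 1);
    return 0;
}
```

The two-call pattern (query the size, then fetch the data) is the same one that OpenCL itself uses in functions like clGetPlatformInfo.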
Regarding the installable client drivers: I'm not sure whether or how this could influence JavaCPP - I haven't looked at enough of the details here, and have to admit that I don't have a profound background in all the technical details on the native side. But one of the issues I encountered was the binding of methods. In JOCL, the native methods themselves are basically implemented as usual, but the actual OpenCL library functions are not called directly. Instead, they are called via function pointers that are obtained at runtime. For example, the clCreateContext function would usually be wrapped in JNI like this:
JNIEXPORT jobject JNICALL Java_org_jocl_CL_clCreateContextNative
(JNIEnv *env, jclass UNUSED(cls), jobject properties, jint num_devices, jobjectArray devices, jobject pfn_notify, jobject user_data, jintArray errcode_ret)
{
...
// Directly call the clCreateContext function:
nativeContext = clCreateContext(nativeProperties, nativeNum_devices, nativeDevices, nativePfn_notify, nativeUser_data, &nativeErrcode_ret);
...
}
But instead, a function pointer type for this function is defined:
typedef CL_API_ENTRY cl_context
(CL_API_CALL *clCreateContextFunctionPointerType)(
const cl_context_properties *, cl_uint, const cl_device_id *,
void (CL_CALLBACK *)(const char *, const void *, size_t, void *),
void *, cl_int *) CL_API_SUFFIX__VERSION_1_0;
and a global pointer of this type is declared:
clCreateContextFunctionPointerType clCreateContextFP = NULL;
which is then explicitly initialized (in a platform-dependent way - e.g. using GetProcAddress(libraryHandle, name) on Windows) and used in the JNI implementation:
JNIEXPORT jobject JNICALL Java_org_jocl_CL_clCreateContextNative
(JNIEnv *env, jclass UNUSED(cls), jobject properties, jint num_devices, jobjectArray devices, jobject pfn_notify, jobject user_data, jintArray errcode_ret)
{
...
if (clCreateContextFP == NULL)
{
// throw...
}
...
nativeContext = (clCreateContextFP)(nativeProperties, nativeNum_devices, nativeDevices, nativePfn_notify, nativeUser_data, &nativeErrcode_ret);
...
}
The reason for these contortions is that, as far as I understood it, you never know which functions are actually available in the installed client. Directly linking against these libraries (by referring to "OpenCL.lib") may end up with linker errors for undefined references. With the function-pointer approach, a missing function simply leaves its pointer null, which can be checked at runtime.
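For completeness, the same lazy-lookup pattern can be sketched with the POSIX counterparts dlopen/dlsym (the library and symbol names here are illustrative, not taken from JOCL):

```cpp
#include <dlfcn.h>

// Function pointer type for the symbol we want to resolve at runtime.
typedef double (*CosFunction)(double);

// Returns nullptr when the library or the symbol is missing - the
// pointer then simply stays null and the JNI layer can throw a Java
// exception, instead of failing at link time with an undefined reference.
CosFunction lookupCos(const char *libraryName) {
    void *handle = dlopen(libraryName, RTLD_NOW);
    if (handle == nullptr) {
        return nullptr;
    }
    return reinterpret_cast<CosFunction>(dlsym(handle, "cos"));
}
```

On Windows the corresponding calls would be LoadLibrary and GetProcAddress, which is exactly what the JOCL initialization code does.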
Maybe it's not an issue for JavaCPP at all - I was just curious whether you had to handle this in any way.
There are some further details that might be interesting (e.g. asynchronous operations and their interdependencies with the garbage collector), but... this post is long enough for now.