NVIDIA NVBIO for Bioinformatics and Statistics

Hi everyone.

I found a nice library that everyone will like.

Link: https://developer.nvidia.com/nvbio

It is called NVBIO, for Bioinformatics and Statistics.

The interesting thing about it is that it provides a framework for file IO functions that work both on the CPU host and on the GPU device.

It should be useful for those interested in speeding up their file IO operations within a CUDA framework.

Anyhow, I just came across this library. I have never run a makefile, or even a CMake file.

Does anyone have an idea of how I can compile this project so that I can use its library with my own projects?

I'm trying to solve a very difficult problem, and I need all the help I can get.

I would post directly to their forum, but they use Google Groups, and I don’t know how to make a post to their group directly.

Thank you.

Yes, there is an increasing number of libraries that either use CUDA directly or are built upon CUDA libraries like cuBLAS or cuSPARSE.

However, it should be clear that there are not Java bindings for all of these libraries. NVBIO could be an interesting one, but so are cuDNN and others.

So if your goal is to use this library from Java, then compiling it won’t be of much help. Anyhow, IF they have set up their CMake files properly, the process of compiling it should be rather simple:

  • Download it to some directory like C:\NVBIO
  • Start CMake-gui, and point it to the C:\NVBIO directory
  • Set the output path, for example, C:\NVBIO.build
  • Press "Configure"
  • Press "Generate"
  • Open the resulting Visual Studio files in the "C:\NVBIO.build" directory, and compile the solution

Maybe I’ll have another look at the file IO part, but … the summary says

  • File IO routines for many common file formats (read data, aligned reads, genome data)
    – IO routines are implemented so that they can pass data efficiently to and from the GPU

This does not tell much about how it is actually implemented, but I think the key point is that the data is read and then passed to the GPU (efficiently … of course ;-)). There are options for this in Java/JCuda as well, but this should not be interpreted as "reading files from a kernel". Instead, they’ll likely use some page-locked host memory for efficient data transfer (in Java, one could do similar things).
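
To illustrate what "page-locked host memory for efficient data transfer" usually means in plain CUDA, here is a minimal sketch. This is NOT NVBIO's actual code; the file name, the chunk size and the (missing) error handling are only placeholders. The idea is simply that ordinary host-side file IO fills a pinned buffer, which can then be copied to the device asynchronously:

    // Minimal sketch of pinned-memory file IO + transfer (not NVBIO's actual code).
    // The file name "reads.fastq" and the buffer size are only placeholders.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        const size_t size = 1 << 20;   // 1 MB chunk, chosen arbitrarily
        char* h_buffer = nullptr;
        char* d_buffer = nullptr;

        // Page-locked (pinned) host memory is what allows fast, asynchronous copies
        cudaHostAlloc((void**)&h_buffer, size, cudaHostAllocDefault);
        cudaMalloc((void**)&d_buffer, size);

        // Ordinary host-side file IO fills the pinned buffer
        FILE* file = fopen("reads.fastq", "rb");
        if (file == NULL) return 1;
        size_t bytesRead = fread(h_buffer, 1, size, file);
        fclose(file);

        // Asynchronous copy to the device; a kernel working on the previous
        // chunk could overlap with this transfer
        cudaStream_t stream;
        cudaStreamCreate(&stream);
        cudaMemcpyAsync(d_buffer, h_buffer, bytesRead, cudaMemcpyHostToDevice, stream);
        cudaStreamSynchronize(stream);

        cudaStreamDestroy(stream);
        cudaFree(d_buffer);
        cudaFreeHost(h_buffer);
        return 0;
    }

The corresponding runtime calls (cudaHostAlloc, cudaMemcpyAsync) are available in JCuda as well, so the same read-into-pinned-buffer-and-copy pattern could be built on the Java side.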