Use JCufft with an audio file

Hi Marco. I’d like to know whether it’s possible to use the JCufft library to parallelize the processing of an audio file (.wav). If it is possible, could you give me a simple example of how to do it? The JCufftSample “only” performs a 1D C2C transform. Please help me.


Sorry, I’m not that familiar with the applications of FFT in general. If you have more specific information about what you want to compute, I can try to help you, but I cannot promise anything…


(BTW: I moved this into a new thread, since it might be interesting for others as well, but was not related to the thread where it was posted originally)

Ok Marco. I just want to know whether it’s possible to use JCuda (not only JCufft) to parallelize the processing of an audio file (or a video, like the latest internet browsers do). This is because I want to demonstrate the power of JCuda, not only with “normal” data. I hope you understand what I mean. Thanks in advance.


Well, I’m not completely sure what you mean. I think there is no need to parallelize pure audio playback. But one could consider, for example, “distorting” or otherwise manipulating audio data with CUDA during playback. For example, it might (at least theoretically) be possible to change the frequencies inside an audio file while it is played.

More obvious may be applications in video processing. There is already a library that allows general image processing. You could, for example, imagine “blurring” a video, or performing edge detection in a video while it is played. There are also CUDA libraries for general H.264 video decoding.

But all this is certainly far from trivial, and I’m not sure to what extent this is applicable in a Java environment. (Although I already considered applying some simple edge detection to a video that is played with JMF, the Java Media Framework, I did not yet have the chance to really give it a try…)



Well, I’m not completely sure what you mean. I think there is no need to parallelize pure audio playback. But one could consider, for example, “distorting” or otherwise manipulating audio data with CUDA during playback. For example, it might (at least theoretically) be possible to change the frequencies inside an audio file while it is played.

Could you give me an example of this? I have a .wav audio file: how (and what) can I do?


The remark “(at least theoretically)” should imply that I think that it might be possible, but I don’t have a specific idea how :wink: I also mentioned that it will not be trivial. One starting point could be the documentation of the Sampled Audio package (javax.sound.sampled). When you already have a simple program that (somehow) reads a WAV file into a byte[] array, and that is capable of playing the contents of a byte[] array like a WAV file, then it might be possible to “hook in” at this point and somehow manipulate the sound data.
Maybe I can try to create an example for that (I always wanted to do that, but did not yet have the time), but I cannot promise whether (or even when) I can do this…
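As a rough sketch of the “read a WAV into a byte[] array and play it back” part mentioned above (plain javax.sound.sampled, no CUDA involved yet; the class and method names here are made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class WavBytes {

    // Read the raw PCM frames of a WAV file into a byte[] array
    public static byte[] readWav(File file) throws Exception {
        AudioInputStream in = AudioSystem.getAudioInputStream(file);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[4096];
        int read;
        while ((read = in.read(buffer)) > 0) {
            out.write(buffer, 0, read);
        }
        in.close();
        return out.toByteArray();
    }

    // Play raw PCM bytes (possibly manipulated in between) with the given format
    public static void play(byte[] data, AudioFormat format) throws Exception {
        SourceDataLine line = AudioSystem.getSourceDataLine(format);
        line.open(format);
        line.start();
        line.write(data, 0, data.length);
        line.drain();
        line.close();
    }
}
```

The interesting “hook in” point would be between the two calls: read the bytes, manipulate them (eventually with CUDA), then hand them to play().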


Ok Marco. Thanks for the reply, but I just want to do a little example of some kind of parallelization using JCuda that is not present in your samples (like a sort with JCudpp, or something with JCurand, or something that is “visible”). Any help would be appreciated. Thanks.

Hi Marco. I’m not able to run the example: I downloaded several things (Git-1.7.4-preview20110204, mingw-w64-v1.0-snapshot-20110505.tar, sgothel-jogl-v2.0-rc2-226-g2316340, sgothel-gluegen-v2.0-rc2-23-g4fd04fd), but I’m still not able to run it.
“In order to compile and run this sample, you will have to download JOGL from”: yes, but what exactly? I use Windows XP Professional 32 bit with NetBeans IDE 6.9.
This is the result when I try to run the project:

Caused by: java.lang.RuntimeException: Uncompilable source code - cannot find symbol
symbol: class GLEventListener
at jcudadriverglsample3.Main.(
Could not find the main class: jcudadriverglsample3.Main. Program will exit.
Exception in thread “main” Java Result: 1

Please help me.


Well, this sample is not even remotely related to audio processing. Maybe the basic “JCudaDriverCubinSample” would be a better starting point, or, concerning the initial question, maybe the “JCufftSample”…?

But anyhow: The current version of JOGL should be available on … well, I hope so, at least… I guess nobody has the slightest clue what all these files are… :confused: But this directory contains a file (a 7-Zip archive, which can be opened with 7-Zip or WinRAR). Out of this archive you need


(Do not unpack all the files, because it will not work properly then!). The JAR files should be added to the classpath.
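For instance, assuming the archive contains JAR names like jogl-all.jar and gluegen-rt.jar (the actual file names depend on the JOGL build you downloaded, so adjust them accordingly, and add the JCuda JAR as well), compiling and running from the command line could look roughly like this:

```shell
# Hypothetical JAR names - use the ones actually contained in the archive.
# (On Windows the classpath separator is ';' instead of ':')
javac -cp ".:jcuda.jar:jogl-all.jar:gluegen-rt.jar" jcudadriverglsample3/Main.java
java  -cp ".:jcuda.jar:jogl-all.jar:gluegen-rt.jar" jcudadriverglsample3.Main
```

In NetBeans, the equivalent is adding the same JAR files under Project Properties → Libraries.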


10x very much.

Does this mean that it works now? (There had been some problem reports about this example in another thread, which is why this is particularly interesting at the moment…)

Yes. It works, but now I have another question: is it possible to use JCufft to do an FFT of an audio file? From the audio data to the frequencies, for example?

Good to hear that. On the other hand, I still have to track the error from the other thread…

Concerning the FFT, I already mentioned…

If it’s urgent, I could try to raise the priority of this task in my “todo” list, but I can’t promise anything.


Hi Marco. I’ve found this thread on the NVIDIA forum. Maybe it’s possible to do something like that in JCuda using JCufft?

Of course. Something like this is done in the JCufftSample from the website - it’s a simple and straightforward application of CUFFT. The challenging part is to properly integrate this into a Java program with WAV files and all that… but I’ll try to see what I can do after I finished the update to CUDA 4.0 RC2.
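A small, CUDA-free piece of that integration can already be sketched: converting the 16-bit little-endian PCM bytes of a WAV file into the interleaved (real, imaginary) float layout that a C2C transform expects. The helper below is only an illustration (the names are made up); the resulting array is what one would then hand to JCufft, roughly as in the JCufftSample (cufftPlan1d with CUFFT_C2C, then cufftExecC2C with CUFFT_FORWARD):

```java
public class PcmToComplex {

    // Convert 16-bit little-endian mono PCM bytes into the interleaved
    // (re0, im0, re1, im1, ...) float layout used by a C2C CUFFT transform.
    public static float[] toComplex(byte[] pcm) {
        int n = pcm.length / 2;             // number of 16-bit samples
        float[] complex = new float[2 * n];
        for (int i = 0; i < n; i++) {
            int lo = pcm[2 * i] & 0xFF;     // unsigned low byte
            int hi = pcm[2 * i + 1];        // sign-extended high byte
            complex[2 * i] = (hi << 8 | lo) / 32768.0f; // real part in [-1, 1)
            complex[2 * i + 1] = 0.0f;                  // imaginary part
        }
        return complex;
    }
}
```

After the transform, the even entries of the output hold the real parts and the odd entries the imaginary parts of the frequency bins.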

Ok, thanks Marco. It’s fairly urgent, but the CUDA 4.0 release is much more so. I’ll try to do something myself. Bye.

Hi Marco. I’ve found this. Could it be of interest?

Yes, there are plenty of examples and tutorials. You may try doing it on your own. In any case, I’ll try it as well sooner or later.

Ok. Thanks Marco.

Well, I played around a little. The basic process of reading the file, manipulating the data, and playing the manipulated data is fairly easy. But admittedly, I don’t have the slightest clue about the “acoustic meaning” of an FFT. I tried, for example, to modify the frequencies in a spoken text by simply doing a forward FFT, multiplying/shifting the result, and transforming it back, but one certainly has to be more deeply involved to achieve a specific effect there… (I got lots of weird noises; interesting, but hardly useful as long as these are “random” results…). Hm. Well, I still don’t know what this is all about, anyhow.
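For reference, the “forward FFT, modify, inverse FFT” round trip described above can be sketched without CUDA at all, using a tiny radix-2 FFT. This is just a plain-Java stand-in for what JCufft would do on the GPU; any actual manipulation of the frequency bins would go between the two calls:

```java
public class SpectrumTweak {

    // In-place iterative radix-2 FFT (the length must be a power of two).
    // sign = -1 for the forward transform, +1 for the inverse.
    public static void fft(double[] re, double[] im, int sign) {
        int n = re.length;
        // Bit-reversal permutation
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j ^= bit;
            if (i < j) {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }
        // Butterfly stages
        for (int len = 2; len <= n; len <<= 1) {
            double ang = sign * 2 * Math.PI / len;
            double wr = Math.cos(ang), wi = Math.sin(ang);
            for (int i = 0; i < n; i += len) {
                double cr = 1, ci = 0;
                for (int k = 0; k < len / 2; k++) {
                    int a = i + k, b = i + k + len / 2;
                    double xr = re[b] * cr - im[b] * ci;
                    double xi = re[b] * ci + im[b] * cr;
                    re[b] = re[a] - xr; im[b] = im[a] - xi;
                    re[a] += xr;        im[a] += xi;
                    double ncr = cr * wr - ci * wi;
                    ci = cr * wi + ci * wr; cr = ncr;
                }
            }
        }
        // Scale the inverse so that fft(fft(x, -1), +1) == x
        if (sign > 0) {
            for (int i = 0; i < n; i++) { re[i] /= n; im[i] /= n; }
        }
    }
}
```

Manipulating the bins between the forward and inverse call (for example, zeroing the upper half of the spectrum for a crude low-pass filter) is exactly the kind of experiment described above; doing it naively on whole buffers is also why the results tend to sound like “random” noise.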