
Thread: JCublas2 Matrix Multiplication Problem

#1 JunT (Guest)
    Hi,

first off: great library! However, I am currently facing issues when multiplying matrices that are quite big (>144x144).

Some info about my system:
* Windows 7, 64 bit
* Java 7, 64 bit (1.7.0_03)
* CUDA 4.1 (64 bit) with NVIDIA driver 295.73 on a GTX 580
* JCuda 0.4.1 (64 bit)

    Here is my code: https://github.com/thomasjungblut/th...trixUtils.java

Let me explain it a bit.
I am currently developing a collaborative filtering algorithm based on the ml-class.org implementation. This involves some heavy multiplication of large matrices, so I figured CUDA/CUBLAS would bring a bit of an improvement.
On the CPU everything works as expected, but with CUDA I get problems on larger matrices.

I have written a "test case" in the main method where I pass my own matrix class (DenseDoubleMatrix, row-major format, i.e. a plain double[][]) filled with random doubles. To make things a bit easier, I use square matrices.
The whole CUDA part works like this (at least, how I think it should; a sketch follows the list):
• transform my matrix into a single double array in column-major format
• allocate device memory for input matrix A
• use JCublas2.cublasSetMatrix(...) to "write" it to the device memory
• do the same with the other matrix
• allocate device memory for the output matrix
• call cublasDgemm with the parameters it needs and synchronize the device
• retrieve the result matrix with cublasGetMatrix and unfold it back into my own matrix class
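
    Roughly, in code, the flow looks like this (a simplified, illustrative sketch, not the exact code from my repository; the method and variable names are made up, and alpha/beta are plain host pointers, which is the default pointer mode):

    Java Code:
    import static jcuda.jcublas.JCublas2.*;
    import static jcuda.jcublas.cublasOperation.CUBLAS_OP_N;
    import static jcuda.runtime.JCuda.*;
    import jcuda.Pointer;
    import jcuda.Sizeof;
    import jcuda.jcublas.cublasHandle;

    public class GemmSketch {

        // a and b are row-major n x n matrices; returns c = a * b, row-major
        static double[][] multiplyOnGpu(double[][] a, double[][] b, int n) {
            // flatten into column-major order: element (r,c) -> index c*n + r
            double[] hA = new double[n * n];
            double[] hB = new double[n * n];
            double[] hC = new double[n * n];
            for (int r = 0; r < n; r++) {
                for (int c = 0; c < n; c++) {
                    hA[c * n + r] = a[r][c];
                    hB[c * n + r] = b[r][c];
                }
            }

            cublasHandle handle = new cublasHandle();
            cublasCreate(handle);

            // allocate device memory for A, B and the output C
            Pointer dA = new Pointer();
            Pointer dB = new Pointer();
            Pointer dC = new Pointer();
            cudaMalloc(dA, n * n * Sizeof.DOUBLE);
            cudaMalloc(dB, n * n * Sizeof.DOUBLE);
            cudaMalloc(dC, n * n * Sizeof.DOUBLE);

            // "write" the host data to the device
            cublasSetMatrix(n, n, Sizeof.DOUBLE, Pointer.to(hA), n, dA, n);
            cublasSetMatrix(n, n, Sizeof.DOUBLE, Pointer.to(hB), n, dB, n);

            // C = 1.0 * A * B + 0.0 * C, then synchronize the device
            Pointer alpha = Pointer.to(new double[]{ 1.0 });
            Pointer beta = Pointer.to(new double[]{ 0.0 });
            cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                alpha, dA, n, dB, n, beta, dC, n);
            cudaDeviceSynchronize();

            // retrieve the result and unfold it back to row-major
            cublasGetMatrix(n, n, Sizeof.DOUBLE, dC, n, Pointer.to(hC), n);
            double[][] result = new double[n][n];
            for (int r = 0; r < n; r++) {
                for (int c = 0; c < n; c++) {
                    result[r][c] = hC[c * n + r];
                }
            }

            cudaFree(dA);
            cudaFree(dB);
            cudaFree(dC);
            cublasDestroy(handle);
            return result;
        }
    }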


This works fine, so I don't think there is a major problem. However, when I increase the size of the matrices, I get sporadic NaN values in the output matrix.
The sad thing is that no error is thrown.

I execute a multiplication on the CPU with the standard 3-loop method and then execute the same multiplication with JCublas.
It is normal that there are some rounding differences between the CPU and GPU solutions, so I simply take the difference of both matrices and sum the absolute element values.
Small differences are not a big deal for me; NaNs, however, are a serious problem.
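
    The comparison itself is roughly this (again an illustrative sketch, not my exact code):

    Java Code:
    // reference: the standard naive 3-loop multiplication on the CPU
    static double[][] multiplyCpu(double[][] a, double[][] b, int n) {
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int k = 0; k < n; k++) {
                for (int j = 0; j < n; j++) {
                    c[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        return c;
    }

    // sum of the absolute element-wise differences; note that a single NaN
    // anywhere makes the whole sum NaN, which is how it shows up below
    static double absDiffSum(double[][] x, double[][] y, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                sum += Math.abs(x[i][j] - y[i][j]);
            }
        }
        return sum;
    }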

I have prepared a sample output (lines starting with // are my comments):
Code:
Using device GeForce GTX 580 with total RAM of 1610612736 bytes!
// did the device report an error?
no error
// using 2x2 results in a difference of 0.0
2 0.0
no error
3 3.3306690738754696E-16
no error
4 2.7755575615628914E-16
no error
// no problems so far..
143 2.63792543364616E-10
no error
144 2.6637181349542516E-10
no error
// BAM, not working anymore
145 NaN
no error
146 NaN
no error
147 NaN
no error
148 NaN
no error
149 NaN
no error
150 NaN
no error
151 NaN
no error
152 NaN
no error
153 NaN
no error
154 NaN
no error
155 NaN
// remaining NaN results omitted...

The worst thing is that it does not always break at 144x144; sometimes it breaks only at 244x244 or even at 312x312.
And as you can see, no errors are recorded.
Debugging through the matrices reveals that only a small, seemingly arbitrary number of elements are NaN.
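
    (I count them with something like the following; "gpuResult" is just a placeholder name for my result matrix:)

    Java Code:
    // count the NaN entries in the GPU result
    int nanCount = 0;
    for (int r = 0; r < n; r++) {
        for (int c = 0; c < n; c++) {
            if (Double.isNaN(gpuResult[r][c])) {
                nanCount++;
            }
        }
    }
    System.out.println(nanCount + " of " + (n * n) + " elements are NaN");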

    Do I have a hardware failure or am I missing something obvious?

Thanks! If you need additional information, I'd be glad to provide it.

#2 Marco (Global Moderator)
    Hello

Nothing obvious here. I just tried it. I had to convert all occurrences of Double/DOUBLE/double to Float/FLOAT/float (because I'm still on an old GeForce 8800 here) and cublasDgemm to cublasSgemm, let it run through all sizes between 2 and 660, and finally tested it with 2000x2000, and did not receive a single NaN. I can try to run another test on a newer card with double support as soon as I have the chance.

If you have Visual Studio installed, you may want to adapt the simpleCUBLAS sample from NVIDIA to perform a similar test.

According to the output that you posted, it keeps spilling out NaNs after it has happened once. Is this true? (Or does it happen that it works correctly again later?) A heat problem could explain the observation to some extent (especially if the GPU is overclocked). It might be the case that the card heats up during the computation of the first matrices, and at an unspecified point it is so hot that it causes errors. Just as a test: you mentioned that the error sometimes occurs only at matrix sizes >300. What happens when you run only a single test, with a size of 500x500 (which would "certainly" have caused an error otherwise)? There are also tools for monitoring the GPU temperature. But of course, these are just guesses that might explain the symptoms...


    bye
    Marco

#3 JunT (Guest)
    Hi Marco,

    these are very good points!

> If you have Visual Studio installed, you may want to adapt the simpleCUBLAS sample from NVIDIA to perform a similar test.

I have access to MSDNAA (or whatever it is called now), so I can download it. I'll try the simpleCUBLAS sample later this week. A few days ago I tested with the precompiled binaries shipping with the samples, and those worked correctly.

> Is this true? (Or does it happen that it works correctly again later?)

This is actually funny. I thought this would be a persistent state, but after a few matrix multiplications resulting in NaN, it actually seems to recover:

Code:
Using device GeForce GTX 580 with total RAM of 1610612736 bytes!
no error
2 0.0
no error
3 2.220446049250313E-16
no error
4 6.661338147750939E-16
no error
// fine so far
206 9.311307280768233E-10
no error
207 9.489724561717594E-10
no error
208 9.606537787476555E-10
no error
209 NaN
no error
210 NaN
no error
211 NaN
no error
// NaN still persists
247 NaN
no error
248 NaN
no error
249 NaN
no error
250 NaN
no error
251 NaN
no error
252 NaN
no error
253 NaN
no error
254 NaN
no error
255 NaN
no error
256 NaN
no error
257 2.0115464849368436E-9
no error
258 2.0368560171846184E-9
no error
259 2.0725607896565634E-9
no error
260 2.0876029793726048E-9
no error
// no errors until 300

> What happens when you run only a single test, with a size of 500x500 (which would "certainly" have caused an error otherwise)?

A single test does not seem to be a problem. However, if I run multiple tests (300 of them), I get NaN results between iterations 160 and 245; afterwards it works fine again.

When running a test over square matrix sizes from 150 to 3000, the error happens sporadically within certain ranges.

> A heat problem could explain the observation to some extent (especially if the GPU is overclocked).

Yep, that was my first thought as well. I am using an EVGA GTX 580 SC (superclocked); it has slightly higher clocks:

card / clock / memory / shader (all in MHz)
580 SC / 797 / 2025 / 1594
580 / 772 / 2004 / 1544

However, I have no temperature problem. The card stays between 40°C and 60°C the whole time, which is a normal temperature, and the fan stays under 50%. I verified this with three tools: EVGA Precision, RMClock and NVIDIA Inspector.

VRAM usage seems to increase correctly according to the allocations. GPU usage shows that the card remains almost idle (1-2%).

Do you think I can DoS (in terms of denial of service) my graphics card?

Okay, so I guess the next things to consider are:
• use C/C++ code to verify that the problem is not related to Java/JNI/JCuda
• clock my card back to reference clocks
• exchange the doubles for floats; maybe this is a bug in their double implementation


A voltage drop could also cause the problem, but this seems very unlikely because I would have seen it in my monitoring tools.

Thanks Marco for your ideas; maybe you have others still. I'll try to verify these things throughout the week and report my results.

#4 Marco (Global Moderator)
    OK, I'll try to run a test with Double values as soon as possible, to see whether I can reproduce the error.

#5 JunT (Guest)
I just replaced the doubles with floats and am currently processing matrices from 2x2 to 2000x2000 without any NaN problems.
So this really does seem to be a double problem.

I'll try to verify this with C code then.

#6 Marco (Global Moderator)
I just ran a quick test on a GTX 560 Ti with 'double', and it also produces NaNs after a while (although later, at sizes >500). I think it's unlikely that this is related to JCuda, but you never know... The behavior is really strange, and I cannot think of a profound explanation. It will be interesting to see whether the simpleCUBLAS example shows similar behavior.

I also did some web searches yesterday, but did not find any hints that there may be a bug. The release notes of CUDA 4.0 contained this entry:
* In the previous release of the CUBLAS Library, the cublasDgemm() routine produced incorrect results in some cases when k < 32 and matrix A is transposed. This has been fixed in this release.
( http://developer.download.nvidia.com...ease_Notes.txt )
But this is rather unspecific, and does not seem to be related to this error...

#7 JunT (Guest)
Great that you have the same problem ^^

I think this is unrelated to JCuda as well, since (as far as I have seen) it just uses JNI to talk to the libraries.

I guess we should take this issue to the NVIDIA forums.

#8 Marco (Global Moderator)
It's not unlikely that at the NVIDIA forum, the first guess will be that it is related to JCuda. Did you have the chance to adapt the simpleCUBLAS sample, so that the request at the NVIDIA forum can be supported with a compilable and testable sample? (I can try to set up such a sample, but I don't always have access to a GPU with double support...)

#9 JunT (Guest)
Yeah, I'm downloading Visual Studio right now.

I've seen an Oracle forum discussion about JNI and double NaNs:
https://forums.oracle.com/forums/thr...readID=2227338

Do you think there could be a problem converting from double to jdouble in the JNI code?

#10 JunT (Guest)
    Hey Marco,

I have verified that CUBLAS works fine with doubles in C++, with matrices from 2x2 to 2000x2000.

I have uploaded my Visual Studio project and binary here:

http://people.apache.org/~tjungblut/...ads/cublas.zip

You'll find the binary in x64\Release\cublas.exe.

I hope you can verify this as well once you have access to the Fermi card.

#11 Marco (Global Moderator)
OK, I ran another short test, and can confirm that the error does not seem to occur in the native simpleCUBLAS example. I changed it to use double, and basically converted the 'main' into a test run where the matrix size 'N' is increased in a loop. I also changed the PointerMode to 'device', just to be sure that it does (as far as possible) the same as the JCublas example. (BTW: alpha and beta are not freed in your version.)

On the one hand: thank you for pointing out this bug. On the other hand: sigh... I don't have the slightest idea what might be the reason for this, have never encountered such an error before, have only limited possibilities for testing with 'double' values at all, and debugging it will be hard because it does not occur in a reproducible setup...

I did another quick test with the JCublas version and changed the PointerMode from 'device' to 'host'. In this case, it ran up to a size of 1646 without an error (then it ran out of memory). This might be an indication that it is related to the PointerMode, but I'll have to invest more time in this and see whether I can find out what's wrong there...

#12 JunT (Guest)
Yeah, that is bad.

Well, I can help you, though. How did you compile the DLLs (JCublas2-windows-x86_64.dll)? With the Cygwin gcc or MinGW?

> I did another quick test with the JCublas version and changed the PointerMode from 'device' to 'host'. In this case, it ran up to a size of 1646 without an error (then it ran out of memory).

Can you show me what I have to change to verify this? When I set up the handle with

Java Code:
cublasHandle handle = new cublasHandle();
JCublas2.cublasCreate(handle);
JCublas2.cublasSetPointerMode(handle,
    cublasPointerMode.CUBLAS_POINTER_MODE_HOST);

I get a crash ("problematic frame"):

Code:
# JRE version: 7.0_03-b05
# Java VM: Java HotSpot(TM) 64-Bit Server VM (22.1-b02 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C [cublas64_41_28.dll+0x8eff9]
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j jcuda.jcublas.JCublas2.cublasDgemmNative(Ljcuda/jcublas/cublasHandle;IIIIILjcuda/Pointer;Ljcuda/Pointer;ILjcuda/Pointer;ILjcuda/Pointer;Ljcuda/Pointer;I)I+0
j jcuda.jcublas.JCublas2.cublasDgemm(Ljcuda/jcublas/cublasHandle;IIIIILjcuda/Pointer;Ljcuda/Pointer;ILjcuda/Pointer;ILjcuda/Pointer;Ljcuda/Pointer;I)I+24
j de.jungblut.math.cuda.JCUDAMatrixUtils.multiply(Lde/jungblut/math/DenseDoubleMatrix;Lde/jungblut/math/DenseDoubleMatrix;)Lde/jungblut/math/DenseDoubleMatrix;+174
j de.jungblut.math.cuda.JCUDAMatrixUtils.main([Ljava/lang/String;)V+48
v ~StubRoutines::call_stub
> (BTW: alpha and beta are not freed in your version.)

Oops, sorry. I hope I didn't leak too much of your memory with those 4000 doubles.

#13 Marco (Global Moderator)
    The DLLs are compiled with Visual Studio. I'll have to check whether there are compile options that may influence this behavior, but as long as I don't have an idea where the error might come from, it's just helpless guessing...

    When switching to HOST pointer mode, alpha and beta have to be host pointers:
Java Code:
Pointer alpha = Pointer.to(new double[]{ 1.0 });
Pointer beta = Pointer.to(new double[]{ 1.0 });
cublasDgemm(... alpha, ... beta ...);
    But even if it really worked (for you as well) this would not explain the behavior.
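
    For contrast, in DEVICE pointer mode they have to live in device memory, which is what the original code does; sketched roughly (this mirrors the minimal sample further below):

    Java Code:
    // DEVICE pointer mode: alpha must be allocated on and copied to the GPU
    Pointer alpha = new Pointer();
    cudaMalloc(alpha, Sizeof.DOUBLE);
    cudaMemcpy(alpha, Pointer.to(new double[]{ 1.0 }),
        Sizeof.DOUBLE, cudaMemcpyHostToDevice);
    // ... pass it to cublasDgemm, and cudaFree(alpha) afterwards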

I think I'll have to take a vacation in order to have enough time to go through all the options. The first thing would be to see where the errors occur in the matrix. During my tests, I did not compare the result to the host version; instead, I only checked for 'NaN' values in the matrix (which is much faster, of course), and it seemed that when the error occurred, it was always at position (0,0). Another thing I wanted to test is whether the memory really reaches the GPU properly (by copying it back and comparing it to the input), but I have never seen errors with memory copies, so it's unlikely that there is a problem there (and it would not explain why it only happens with 'double' values).
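
    Such a round-trip check could look like this (just a sketch):

    Java Code:
    // copy the input back from the device and compare it with what was sent
    double[] readBack = new double[n2];
    cublasGetVector(n2, Sizeof.DOUBLE, d_A, 1, Pointer.to(readBack), 1);
    if (!java.util.Arrays.equals(h_A, readBack)) {
        System.out.println("Input matrix did not arrive on the GPU intact");
    }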

#14 JunT (Guest)
> When switching to HOST pointer mode, alpha and beta have to be host pointers:

Yep, that fixes it, thanks for the explanation!

That is a really strange problem. If you need help verifying anything, just tell me.

#15 Marco (Global Moderator)
    Did it also work for you with Host pointers?

#16 JunT (Guest)
    Yes, it worked.

#17 Marco (Global Moderator)
OK, it's becoming increasingly confusing. I just tried to create a minimal sample to reproduce the error, in order to file a bug report, and in this sample the error does not seem to occur any more:
Java Code:
package jcuda.bugs;

import static jcuda.jcublas.JCublas2.cublasCreate;
import static jcuda.jcublas.JCublas2.cublasDestroy;
import static jcuda.jcublas.JCublas2.cublasDgemm;
import static jcuda.jcublas.JCublas2.cublasGetVector;
import static jcuda.jcublas.JCublas2.cublasSetPointerMode;
import static jcuda.jcublas.JCublas2.cublasSetVector;
import static jcuda.jcublas.cublasOperation.CUBLAS_OP_N;
import static jcuda.jcublas.cublasPointerMode.CUBLAS_POINTER_MODE_DEVICE;
import static jcuda.runtime.JCuda.cudaFree;
import static jcuda.runtime.JCuda.cudaMalloc;
import static jcuda.runtime.JCuda.cudaMemcpy;
import static jcuda.runtime.cudaMemcpyKind.cudaMemcpyHostToDevice;

import java.util.Random;

import jcuda.Pointer;
import jcuda.Sizeof;
import jcuda.jcublas.JCublas2;
import jcuda.jcublas.cublasHandle;
import jcuda.runtime.JCuda;

public class Bug1_JCublasDgemmNaN {

    static boolean runTest(int N)
    {
        JCublas2.setExceptionsEnabled(true);
        JCuda.setExceptionsEnabled(true);

        double h_A[];
        double h_B[];
        double h_C[];
        Pointer d_A = new Pointer();
        Pointer d_B = new Pointer();
        Pointer d_C = new Pointer();
        Pointer alpha = new Pointer();
        Pointer beta = new Pointer();
        int n2 = N * N;
        int i;
        cublasHandle handle = new cublasHandle();

        System.out.printf("CUBLAS test %d running...\n", N);

        cublasCreate(handle);

        h_A = new double[n2];
        h_B = new double[n2];
        h_C = new double[n2];
        Random random = new Random(0);
        for (i = 0; i < n2; i++) {
            h_A[i] = random.nextDouble();
            h_B[i] = random.nextDouble();
            h_C[i] = random.nextDouble();
        }

        cudaMalloc(d_A, n2 * Sizeof.DOUBLE);
        cudaMalloc(d_B, n2 * Sizeof.DOUBLE);
        cudaMalloc(d_C, n2 * Sizeof.DOUBLE);

        cublasSetVector(n2, Sizeof.DOUBLE, Pointer.to(h_A), 1, d_A, 1);
        cublasSetVector(n2, Sizeof.DOUBLE, Pointer.to(h_B), 1, d_B, 1);
        cublasSetVector(n2, Sizeof.DOUBLE, Pointer.to(h_C), 1, d_C, 1);

        cublasSetPointerMode(handle, CUBLAS_POINTER_MODE_DEVICE);

        cudaMalloc(alpha, 1 * Sizeof.DOUBLE);
        cudaMemcpy(alpha, Pointer.to(new double[]{1.0}), 1*Sizeof.DOUBLE, cudaMemcpyHostToDevice);

        cudaMalloc(beta, 1 * Sizeof.DOUBLE);
        cudaMemcpy(beta, Pointer.to(new double[]{0.0}), 1*Sizeof.DOUBLE, cudaMemcpyHostToDevice);

        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N, alpha, d_A, N, d_B, N, beta, d_C, N);

        h_C = new double[n2];
        cublasGetVector(n2, Sizeof.DOUBLE, d_C, 1, Pointer.to(h_C), 1);

        boolean result = true;
        for (i = 0; i < N * N; i++)
        {
            if (Double.isNaN(h_C[i]))
            {
                System.out.printf("Error for size %d at %d\n", N, i);
                result = false;
                break;
            }
        }

        cudaFree(d_A);
        cudaFree(d_B);
        cudaFree(d_C);
        cudaFree(alpha);
        cudaFree(beta);

        cublasDestroy(handle);
        return result;
    }


    public static void main(String args[])
    {
        for (int i = 2; i < 2000; i++)
        {
            boolean result = runTest(i);
            if (result != true)
            {
                break;
            }
        }
    }

}

    The 1:1 corresponding C implementation, just for reference:

C Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <cmath>
#include <cuda_runtime.h>
#include <cublas_v2.h>

static int runTest(int N)
{
    cublasStatus_t status;
    double* h_A;
    double* h_B;
    double* h_C;
    double* d_A = 0;
    double* d_B = 0;
    double* d_C = 0;
    double *alpha;
    double *beta;
    int n2 = N * N;
    int i;
    cublasHandle_t handle;

    printf("CUBLAS test %d running...\n", N);

    cublasCreate(&handle);

    h_A = (double*)malloc(n2 * sizeof(h_A[0]));
    h_B = (double*)malloc(n2 * sizeof(h_B[0]));
    h_C = (double*)malloc(n2 * sizeof(h_C[0]));
    for (i = 0; i < n2; i++) {
        h_A[i] = rand() / (double)RAND_MAX;
        h_B[i] = rand() / (double)RAND_MAX;
        h_C[i] = rand() / (double)RAND_MAX;
    }

    cudaMalloc((void**)&d_A, n2 * sizeof(d_A[0]));
    cudaMalloc((void**)&d_B, n2 * sizeof(d_B[0]));
    cudaMalloc((void**)&d_C, n2 * sizeof(d_C[0]));

    cublasSetVector(n2, sizeof(h_A[0]), h_A, 1, d_A, 1);
    cublasSetVector(n2, sizeof(h_B[0]), h_B, 1, d_B, 1);
    cublasSetVector(n2, sizeof(h_C[0]), h_C, 1, d_C, 1);

    cublasSetPointerMode(handle, CUBLAS_POINTER_MODE_DEVICE);

    double alphaValue = 1.0;
    cudaMalloc((void**)&alpha, 1 * sizeof(double));
    cudaMemcpy(alpha, &alphaValue, 1 * sizeof(double), cudaMemcpyHostToDevice);

    double betaValue = 0.0;
    cudaMalloc((void**)&beta, 1 * sizeof(double));
    cudaMemcpy(beta, &betaValue, 1 * sizeof(double), cudaMemcpyHostToDevice);

    status = cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, N, N, N, alpha, d_A, N, d_B, N, beta, d_C, N);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf(stderr, "!!!! kernel execution error.\n");
        return EXIT_FAILURE;
    }

    /* reuse h_C for the result; cublasGetVector overwrites it
       (re-mallocing here, as in the Java version, would leak the first buffer) */
    cublasGetVector(n2, sizeof(h_C[0]), d_C, 1, h_C, 1);

    int result = EXIT_SUCCESS;
    for (i = 0; i < N * N; i++)
    {
        if (h_C[i] != h_C[i]) /* NaN is the only value that is not equal to itself */
        {
            printf("Error for size %d at %d\n", N, i);
            result = EXIT_FAILURE;
            break;
        }
    }

    /* Memory clean up */
    free(h_A);
    free(h_B);
    free(h_C);
    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);
    cudaFree(alpha);
    cudaFree(beta);

    cublasDestroy(handle);
    return result;
}


int main(int argc, char** argv)
{
    for (int i = 2; i < 2000; i++)
    {
        int result = runTest(i);
        if (result != EXIT_SUCCESS)
        {
            break;
        }
    }
    return 0;
}

Can you confirm that the Java example works without producing NaNs? (This would at least give a hint how the bug may be tracked, namely by carefully looking for any possible difference from the original program where the problem still occurred...)
