
Installing PyTorch Geometric w.r.t. CUDA Version

2023-09-30 18:23:59

I’ve been fiddling with PyTorch Geometric setup recently, and I believe we all know that despite being an awesome library, it can sometimes be notoriously hard to get working in the first place. Especially if your CUDA version is way too low. That can be very, very problematic. So, let’s assume we have an unlisted CUDA version (11.1 in my case,) and let’s try to make it work!

Installing PyTorch

So the very first thing is to install PyTorch. And to install PyTorch (the GPU-enabled version, obviously,) you can follow this handy link on the official website. If, however, our CUDA version is too low and becomes unlisted (again,) we can follow this link and simply search for our version. In our case (CUDA 11.1,) we see that the latest supported version is torch 1.10.1.

Installing PyTorch.

So we copy & paste that into our command line and BAM! PyTorch done.
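For reference, the install command for torch 1.10.1 + CUDA 11.1 should look roughly like this (the torchvision/torchaudio pins are from memory, so double-check them against the previous-versions page):

# Roughly what the previous-versions page gives for torch 1.10.1 + CUDA 11.1;
# the torchvision/torchaudio pins here are assumptions, verify them on the page.
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 \
    -f https://download.pytorch.org/whl/cu111/torch_stable.html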

After that, try to import torch and verify that we can indeed create tensors on cuda:

import torch
torch.randn(1, 1, device='cuda')

It is on device! Yay!

Wow! It’s on device! Let’s go!
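While we’re at it, it doesn’t hurt to also print which CUDA build PyTorch thinks it is running:

import torch

# Should report 11.1 and True respectively if the GPU wheel installed correctly.
print(torch.version.cuda)
print(torch.cuda.is_available())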

Advanced: Unsupported/Outdated OSes

For unsupported OSes, sometimes there are no precompiled PyTorch wheels, and that calls for a full compilation from source. If such a case arises, it is very important to stick to one compiler throughout the entire compilation (i.e. the compiler should be the same for both PyTorch and PyTorch Geometric.) If you are using an archaic version of, say, Ubuntu, which leaves you with an ancient GCC and G++, I highly recommend installing Clang instead. For Ubuntu users, you can install Clang using llvm.sh, with a full guide available here. The Clang version can’t be too new or too old, otherwise we can’t compile PyTorch Geometric. My recommended version is clang-6.0, which works for me.
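As a rough sketch (the exact steps are in the guide linked above, and whether llvm.sh still serves a version as old as 6.0 is an assumption on my part):

# Grab the apt.llvm.org helper script and ask it for Clang 6.0.
wget https://apt.llvm.org/llvm.sh
chmod +x llvm.sh
sudo ./llvm.sh 6.0

# On older Ubuntu releases, plain apt may work just as well:
# sudo apt install clang-6.0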

We can tell pip to use our new compiler once we have finished the installation:

export CC=clang-6.0
export CXX=clang++-6.0

# And now install PyTorch
pip install torch==...

Installing PyTorch Geometric

PyTorch Geometric (PyG for short) is more like a toolkit with 4 major dependencies:

  • torch_scatter
  • torch_sparse
  • torch_cluster
  • torch_spline_conv
  • pyg_lib (???)

If you have a relatively new PyTorch and CUDA installed, you can simply follow the guide on its official website. Otherwise, we have to take the matter into our own hands. Note that on its official website, there is a find link for specific PyTorch and CUDA versions:

The find link.

By copy-pasting it and replacing the torch version and CUDA version with ours (PyTorch 1.10.1 + CUDA 11.1), we get https://data.pyg.org/whl/torch-1.10.1+cu111.html. After accessing it we are presented with a list of supported versions of PyG dependencies:

The versions.

After a quick glance we can see that the supported versions of PyG dependencies for our CUDA and PyTorch versions are:

  • torch_cluster: 1.5.9, 1.6.0
  • torch_scatter: 2.0.9
  • torch_sparse: 0.6.12, 0.6.13
  • torch_spline_conv: 1.2.1

Now we simply need to pick their newest versions (or not as new, it’s really up to you) and pip install them.

pip install torch_scatter==2.0.9 torch_sparse==0.6.13 torch_cluster==1.6.0 torch_spline_conv==1.2.1 -f https://data.pyg.org/whl/torch-1.10.1+cu111.html
pip install torch_geometric

WARNING! It is important that you specify the versions here, otherwise pip will completely disregard the find link and just download the newest versions of the 4 dependencies. You might have noticed that pyg_lib is absent from the above list. I’ve noticed that as well, but since my job at the time was simply to install PyG, I disregarded the discrepancy, and no errors came of it. I guess my PyTorch Geometric version is too old for the pyg_lib thing.


Advanced: PyG from Source

If you want some new exciting adventure (or if your OS is too old, as stated above,) you can also compile PyG from source. The procedure is basically the same as above. We just have to make sure the compiler stays the same.

However, if you are using Clang as suggested, you will encounter two errors.

  1. The linker will complain about something like “libomp5-10”. I’m not sure about this, but it seems to be caused by PyTorch’s wheel shipping its own OpenMP library, which clashes with the system-wide OpenMP. This can be solved by specifying the OpenMP library: CFLAGS="-fopenmp=libiomp5" pip install torch_cluster --no-cache-dir (a combined sketch follows this list.)
  2. Some weird template error in PyTorch’s variant.h. Funnily enough, I found the solution to both issues from macOS users with Clang trying to install PyG. This answer solves the issue.
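Putting it all together for one of the dependencies (torch_cluster here; the others work the same way), the from-source install could look something like this, reusing the Clang setup from earlier:

# Same compiler as the one used to build PyTorch.
export CC=clang-6.0
export CXX=clang++-6.0

# Point the build at PyTorch's bundled Intel OpenMP (libiomp5) so it doesn't
# clash with the system-wide OpenMP, and skip any cached wheel.
CFLAGS="-fopenmp=libiomp5" pip install torch_cluster --no-cache-dir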

After a long wait, you should be able to build all 4 (5?) dependencies from source. After that, just install PyG (or build it as well.)

Conclusion

That’s it! A grand adventure of version wrangling, all thanks to the outdated CUDA version on the server side. But I guess that’s why it’s fun, right? To verify that PyG has indeed been installed successfully, I try to import all 4 dependencies independently and print out their CUDA versions:

import torch_scatter
# Prints the CUDA version the wheel was built against (should say 11.1 here).
print(torch_scatter.torch.version.cuda)

# ... and so on.
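A slightly more compact sketch of the same check, assuming each of the four packages re-exports torch the same way torch_scatter does above:

import importlib

# Import each compiled dependency and print the CUDA version its torch reports.
for name in ("torch_scatter", "torch_sparse", "torch_cluster", "torch_spline_conv"):
    mod = importlib.import_module(name)
    print(name, mod.torch.version.cuda)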

And that’s about it. Have fun using PyG!

Copyleft 2023 42yeah.
