feat: support Intel GPUs in Array API testing #31650
Conversation
@ogrisel - you might be interested in this
Thanks for this! One question about the `nocover` pragma.
Great question, and it truly stems from me not having done my homework right. I had included a branch for the case where a PyTorch < 2.4 was used for testing, where the later call to […]

For the impact on sklearn: it will only introduce a lot of skipped tests, as none of the runners (as far as I can see) have any GPU hardware that would interface with integrated GPUs (starting with Intel's Meteor Lake) or any Intel discrete GPUs. Unfortunately, this also seems to be the case for the Apple PyTorch Metal branch (as shown on codecov).

I'll add some screenshots of functional array_api testing in our environments using an Intel Max GPU.
Thanks for the explanations! Let's keep it the way the cupy nocover is. The main reason I asked is that I looked at the MPS clause above, which had no "nocover", and that made me wonder why it was needed here. Fine for me to skip testing this in CI; it would have been nice to have, but it is not required. Getting "exotic" hardware for CI is hard work :D
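For context, here is a minimal sketch of the kind of skip clause under discussion. The helper name and structure are illustrative rather than scikit-learn's actual test utility, and the pragma spelling follows the "nocover" form mentioned above (the project's coverage config determines the exact pattern); `torch.xpu.is_available()` does exist in PyTorch >= 2.4.

```python
import pytest

def _skip_if_no_xpu(torch):
    # The `torch.xpu` module was added in PyTorch 2.4; older versions
    # lack the attribute entirely, so skip outright.
    if not hasattr(torch, "xpu"):
        pytest.skip("PyTorch has no XPU interfaces (requires >= 2.4)")
    # A new enough PyTorch may still have no Intel GPU attached. No CI
    # runner has one, so this branch never executes in CI, hence the
    # coverage pragma, mirroring the cupy clause.
    if not torch.xpu.is_available():  # pragma: nocover
        pytest.skip("No Intel XPU device is available")
```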
LGTM, thanks @icfaust. Looking forward to seeing some CI runners with integrated GPUs.
Following on from #27098, which added support for Apple's Metal devices, this PR adds the ability to run scikit-learn's array_api tests on Intel GPUs via PyTorch (exposed as XPU devices). This enables scikit-learn to test on discrete and integrated GPUs using the Xe architecture, which PyTorch has supported since release 2.4.
The device check first skips if the installed PyTorch has no xpu interfaces (indicating PyTorch < 2.4), and then skips if no XPU device is available.
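As an illustration of what this enables, here is a hedged sketch of user-side array API dispatch on an XPU device. It assumes PyTorch >= 2.4 built with XPU support, an Intel GPU present, and array-api-compat installed; PCA with svd_solver="full" is just one array-API-capable estimator, not something specific to this PR.

```python
import torch

from sklearn import config_context
from sklearn.decomposition import PCA

# Both conditions matter: the `torch.xpu` module exists only in
# PyTorch >= 2.4, and is_available() is False without an Intel GPU.
assert hasattr(torch, "xpu") and torch.xpu.is_available()

X = torch.rand(100, 10, device="xpu")

with config_context(array_api_dispatch=True):
    pca = PCA(n_components=2, svd_solver="full").fit(X)

# Fitted attributes stay on the device as torch tensors.
print(pca.components_.device)  # e.g. xpu:0
```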