I am training a series of ML models using AutoGluon's Mitra foundation model. When I set the fine_tune argument to True, training never seems to finish; the model I am currently training has been running for almost a week. Conversely, when I train with fine_tune set to False, the model finishes in a timely manner.
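For context, here is a simplified sketch of how I am calling the predictor. The file name and label column are placeholders, and the "MITRA" model key and fine_tune_steps option follow the AutoGluon Mitra examples as I understand them; fine_tune=True is the setting that never finishes.

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Placeholder dataset and label column, just to illustrate the call.
train_data = TabularDataset("train.csv")

predictor = TabularPredictor(label="target").fit(
    train_data,
    # "MITRA" key and fine_tune_steps taken from the Mitra examples as I
    # understand them; with fine_tune=True this run does not terminate.
    hyperparameters={
        "MITRA": {"fine_tune": True, "fine_tune_steps": 10},
    },
)
```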
When I set fine_tune to True, I get the following message:
Attempting to fine-tune Mitra on CPU. This will be very slow. We strongly recommend using a GPU instance to fine-tune Mitra.
UserWarning: 'pin_memory' argument is set as true but not supported on MPS now, then device pinned memory won't be used.
I am using a MacBook Pro M2, which has 16 GPU cores. Is there a way to make AutoGluon use the GPU cores that are available? I have tried to activate Apple's MPS backend, but it doesn't seem to work with AutoGluon (see the snippet below).
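This is roughly what I checked when trying to activate MPS. These are standard PyTorch calls, nothing AutoGluon-specific, and they are only meant to show what I attempted:

```python
import torch

# Standard PyTorch checks for the Apple GPU backend.
print(torch.backends.mps.is_available())  # is an MPS device usable?
print(torch.backends.mps.is_built())      # does this PyTorch build include MPS support?

# Plain PyTorch lets me allocate tensors on the Apple GPU like this...
x = torch.ones(3, device="mps")
print(x.device)

# ...but the Mitra fine-tuning run still reports it is running on CPU.
```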
