PyTorch ROCm: Unleashing the Power of Your Radeon GPU for Deep Learning
Surprisingly, the process is streamlined: PyTorch's ROCm builds expose your Radeon GPU through the same device API used for Nvidia's CUDA, so you don't need any Radeon-specific code to select your GPU.
Assuming you have PyTorch ROCm installed correctly, use the following line in your Python code to assign computations to your AMD GPU:
device = torch.device('cuda')
This works because PyTorch's ROCm backend is designed to detect and use your Radeon GPU automatically whenever 'cuda' is specified as the device type.
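To confirm you are actually running a ROCm build and that the GPU is visible, a quick check like the following can help (a minimal sketch; torch.version.hip is a HIP version string on ROCm builds and None on CUDA builds):

import torch

# On ROCm wheels torch.version.hip is set; on CUDA wheels it is None
print(f"HIP version: {torch.version.hip}")
print(f"GPU available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    # The Radeon device name is reported even though the API namespace is 'cuda'
    print(f"Device: {torch.cuda.get_device_name(0)}")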
Here is a helpful resource to learn more:
- Discussion on selecting an AMD GPU with PyTorch ROCm (Stack Overflow, stackoverflow.com)

The complete example below trains a simple linear regression on whichever device is available:
import torch
# Check if ROCm GPU is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Running on device: {device}")
# Create some data
X = torch.randn(100, 1, device=device)
y = 3 * X + 2 # Simple linear relationship
# Define the model (linear regression)
model = torch.nn.Linear(1, 1).to(device)
# Define loss function and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Train the model
for epoch in range(100):
    # Forward pass
    y_pred = model(X)
    loss = criterion(y_pred, y)
    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# Print the learned parameters
print(f"Final weight: {model.weight.item():.3f}, bias: {model.bias.item():.3f}")
However, if running on the AMD GPU isn't feasible on your system, the device check in the example above already falls back to the CPU. If you instead have more than one GPU, you can control which one PyTorch sees.
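Here is a sketch of pinning PyTorch to a specific Radeon GPU, assuming the ROCm runtime honors HIP_VISIBLE_DEVICES the same way Nvidia honors CUDA_VISIBLE_DEVICES (set it before the GPU runtime initializes):

import os

# Expose only the first Radeon GPU to this process; this must happen
# before torch initializes the GPU runtime (assumption: ROCm's
# HIP_VISIBLE_DEVICES behaves like CUDA_VISIBLE_DEVICES)
os.environ['HIP_VISIBLE_DEVICES'] = '0'

import torch

# Indexed device strings work exactly as they do with CUDA
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(f"Selected device: {device}")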