BIBC2025 workshop - reticulate basics
RSFAS, ANU
reticulate
Almost all modern deep learning frameworks and most computer vision tools are implemented in Python.
reticulate is an R package that lets you call Python code seamlessly from R.
Please run scripts/setup.R to set up reticulate and Python.
The script will:
- install reticulate
- install Miniconda, a Python environment manager
- create a conda environment (similar to an R project, but with associated Python packages)
- install the required packages into the conda environment

If you have any installation issues, please let me know.
conda environment
All the required packages and libraries are already installed in the ibsar-cv-workshop conda environment.
We can instruct reticulate to use this environment with the following code:
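A minimal sketch, assuming the conda environment created by scripts/setup.R:

```r
library(reticulate)
use_condaenv("ibsar-cv-workshop")
```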
To verify that reticulate is correctly linked to the environment, run:
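reticulate's py_config() reports which Python binary and environment it is bound to:

```r
py_config()  # should list a Python under the ibsar-cv-workshop environment
```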
In Python, mutable objects can be changed after creation, while immutable objects cannot be modified once created.
Immutable:
- 1, 1.0 (numbers)
- True (boolean)
- 'abc' (string)
- (1, 2, 3) (tuple)

Mutable:
- [1, 2, 3] (list)
- {'a': 1, 'b': 2} (dictionary)
- array([[1, 2, 3], [4, 5, 6]], dtype=int32) (NumPy array)

Named list
{'a': 1.0, 'b': 2.0}
{'speed': [4.0, 4.0, 7.0, 7.0, 8.0, 9.0], 'dist': [2.0, 10.0, 4.0, 22.0, 16.0, 10.0]}
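The two dict printouts above are consistent with r_to_py() conversions along these lines (a sketch; as.list() makes the data frame convert column-wise to a dict rather than to a pandas DataFrame):

```r
library(reticulate)
r_to_py(list(a = 1, b = 2))    # {'a': 1.0, 'b': 2.0}
r_to_py(as.list(head(cars)))   # {'speed': [4.0, 4.0, 7.0, ...], 'dist': [2.0, 10.0, 4.0, ...]}
```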
Array
A Python list will be simplified to an atomic vector when possible.
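For example (a sketch using py_eval() and a NumPy round trip; numpy is assumed to be installed in the environment):

```r
library(reticulate)

# A homogeneous Python list simplifies to an R atomic vector
py_eval("[1, 2, 3]")          # -> c(1, 2, 3)

# An R matrix round-trips through a NumPy array
np <- import("numpy", convert = FALSE)
a  <- np$array(matrix(1:6, nrow = 2))
py_to_r(a)                    # -> back to a 2 x 3 R matrix
```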
torch tensor
Import torch
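A sketch using reticulate's import(); the torch$... syntax used throughout assumes this import:

```r
library(reticulate)
torch <- import("torch")
```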
Create torch tensor
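A sketch; the 2 × 2 example tensor here is inferred from the printed results below:

```r
x <- torch$tensor(
  matrix(c(1, 2, 3, 4), nrow = 2),
  dtype  = torch$float32,
  device = "cpu"
)
x  # tensor([[1., 3.], [2., 4.]])
```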
Key arguments:
- dtype: the data type (torch$float32, torch$int32, …)
- device: where the tensor lives ("cpu", "cuda", "mps", …); torch$accelerator$current_accelerator() returns the current accelerator

torch tensor operations
tensor([[2., 6.],
[4., 8.]])
tensor([[ 1., 9.],
[ 4., 16.]])
tensor([[ 7., 15.],
[10., 22.]])
tensor(2.5000)
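The outputs above are consistent with the following operations on x (a sketch using standard tensor methods; depending on your reticulate version, operator syntax such as x * 2 may also work):

```r
x$mul(2)      # elementwise multiply: tensor([[2., 6.], [4., 8.]])
x$pow(2)      # elementwise square:   tensor([[ 1.,  9.], [ 4., 16.]])
x$matmul(x)   # matrix product:       tensor([[ 7., 15.], [10., 22.]])
x$mean()      # mean of all elements: tensor(2.5000)
```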
Many other operations can be performed on tensors; check out the documentation.
torch tensor indexing
Python uses 0-based indexing, and torch indexing syntax is quite similar to that of R.
x[0] # First element along the first dimension
x[0, 0] # First element of the first dimension,
# first element of the second dimension
x[, 0] # First element along the second dimension
x[, -1] # Last element along the second dimension
x[0:2] # First two elements along the first dimension
x[0:-1] # All elements along the first dimension except the last
x[0] <- torch$tensor(5:6) # Replace the first element along the first
                          # dimension with (5, 6)

torch tensor back to R array
Unfortunately, safely converting a torch tensor back to an R array can be quite complicated, especially when the tensor resides on the GPU or is part of a computation graph.
It’s generally best to keep it as a torch tensor throughout your workflow whenever possible.
Best practice:
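A sketch of the standard detach-then-move pattern (detach(), cpu(), and numpy() are standard PyTorch tensor methods):

```r
# Detach from the computation graph, move to CPU, then convert;
# with the default import("torch") (convert = TRUE), numpy() returns an R array
x_r <- x$detach()$cpu()$numpy()
```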
Define y = \sum_{i=1}^n (x_i^2 + x_i).
tensor(8., grad_fn=<SumBackward0>)
Compute \left(\frac{\partial y}{\partial x_1}, \dots, \frac{\partial y}{\partial x_n}\right), which is (2x_1 + 1, \dots, 2x_n + 1).
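A sketch; the printed tensor(8.) is consistent with x = (1, 2), since (1^2 + 1) + (2^2 + 2) = 8:

```r
x <- torch$tensor(c(1, 2), dtype = torch$float32, requires_grad = TRUE)
y <- x$pow(2)$add(x)$sum()   # y = sum(x_i^2 + x_i)
y                            # tensor(8., grad_fn=<SumBackward0>)
y$backward()                 # populate x$grad
x$grad                       # tensor([3., 5.]) = (2*1 + 1, 2*2 + 1)
```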
Exercise:
1. Generate a random tensor x and another random tensor e, each of shape (50), using torch$rand(). Use torch$float32 as the data type and enable gradient tracking.
2. Compute y as: \boldsymbol{y} = 1 + 3 \boldsymbol{x} + \boldsymbol{e}
3. Compute y_hat as: \hat{\boldsymbol{y}} = 2 + 2 \boldsymbol{x}
4. Compute the loss as: \text{loss} = \frac{1}{50} \sum_{i=1}^{50} (y_i - \hat{y}_i)^2
5. Compute the gradient of the loss with respect to x (a solution sketch follows).
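One possible solution sketch (variable names follow the exercise):

```r
x <- torch$rand(50L, dtype = torch$float32, requires_grad = TRUE)
e <- torch$rand(50L, dtype = torch$float32, requires_grad = TRUE)

y     <- x$mul(3)$add(1)$add(e)     # y = 1 + 3x + e
y_hat <- x$mul(2)$add(2)            # y_hat = 2 + 2x

loss <- y$sub(y_hat)$pow(2)$mean()  # mean squared error over the 50 elements

loss$backward()                     # backpropagate
x$grad                              # gradient of loss w.r.t. x
```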

Slides URL: https://ibsar-cv-workshop.patrickli.org/