Core ML can run models on the CPU or on the GPU. For some models it makes more sense to use the CPU, but for (deep) neural networks the GPU is the tool of choice. Xcode comes with a GPU Frame Capture button that lets us inspect what the GPU is doing, and we can use this to spy on Core ML some more.
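As an aside, you can also force Core ML to stay on the CPU for a given prediction through MLPredictionOptions. A minimal sketch, where model and input are placeholders for your own MLModel and its input features:

import CoreML

// Placeholder names for illustration: model is an MLModel,
// input an MLFeatureProvider. Setting usesCPUOnly keeps this
// particular prediction off the GPU.
let options = MLPredictionOptions()
options.usesCPUOnly = true
let output = try model.prediction(from: input, options: options)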
To enable GPU Frame Capture in a Core ML app, you need to add a few lines of code (for example, inside the main view controller):
import Metal

// Properties on the view controller:
var device: MTLDevice!
var commandQueue: MTLCommandQueue!

// In viewDidLoad(): create the default Metal device and a command
// queue. The app never uses them directly; their mere existence is
// what makes Xcode treat this as a Metal app.
device = MTLCreateSystemDefaultDevice()
commandQueue = device.makeCommandQueue()
This is enough to make Xcode enable the GPU Frame Capture button in the debugger toolbar:
Run the app, press the GPU Frame Capture button to start capturing, wait a second or so, and press the button again to stop. Of course, the app actually needs to be performing a Core ML prediction while the capture takes place.
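One way to guarantee that is to keep firing predictions on a timer while you capture. A rough sketch; again, model and input are placeholders for your own MLModel and input features:

import CoreML
import Foundation

// Run a prediction every 0.1 seconds so the GPU has work to do
// during the capture. model and input are assumed to exist
// elsewhere in your app.
Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
    if let output = try? model.prediction(from: input) {
        print(output.featureNames)
    }
}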
Now Xcode will show what the GPU was doing during the capture:
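Timing the button press by hand can be fiddly. If you prefer, you can also start and stop a capture from code with MTLCaptureManager. A sketch, assuming the iOS 11 API, with the capture wrapped around the prediction:

import Metal

// Assumption: MTLCaptureManager (iOS 11) as an alternative to
// the toolbar button. device is the MTLDevice created earlier.
let captureManager = MTLCaptureManager.shared()
captureManager.startCapture(device: device)
// ... run the Core ML prediction here ...
captureManager.stopCapture()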