ArcGIS Image Server provides a suite of deep learning tools to classify and detect objects in imagery. These tools allow you to generate training sample datasets and export them to a deep learning framework to develop a deep learning model. Then you can perform data inference workflows, such as image classification and object detection.
To take advantage of GPU processing on a multiple-machine raster analytics server site running Windows, at least one GPU must be available on each server node in the site. However, a GPU card is not required to run the deep learning tools in your raster analytics deployment of ArcGIS Image Server. If a raster analytics server machine does not have a GPU card, the tools run on the CPU. On CPU-only raster analytics server machines, install the MKL (Math Kernel Library) builds of the deep learning Python libraries, specifically the TensorFlow and PyTorch packages.
Raster analytics in an ArcGIS Image Server environment can use the TensorFlow, PyTorch, and Keras Python modules with GPUs. Utilization of multiple GPUs per server node applies to some of the deep learning model configurations predefined in ArcGIS, including TensorFlow (ObjectDetectionAPI and DeepLab), Keras (MaskRCNN), and PyTorch.
Add support for deep learning to a Windows or Linux raster analytics deployment
Once you've configured ArcGIS Image Server and your raster analytics deployment, you need to install supported deep learning frameworks packages to work with the deep learning tools.
For instructions on how to install deep learning packages, see the Deep Learning Installation Guide for ArcGIS Image Server.
Starting in 10.8, multiple service instances can use the GPU on each server node. Set the maximum number of instances per machine for the RasterProcessingGPU service based on the number of GPU cards installed and intended for deep learning computation on each machine; the default is 1.
Caution: Do not increase the maximum number of instances per machine for this service if there is only one GPU card per machine.
Verify the minimum and maximum number of instances in ArcGIS Server Manager. Navigate to Services > Manage Services > RasterProcessingGPU, then click RasterProcessingGPU to open the editing page. On the Pooling tab, verify the values for the minimum and maximum number of instances. The default minimum and maximum number of instances per machine is 1. To utilize multiple GPUs per machine, set the maximum number of instances per machine equal to the number of GPU cards installed per machine. For example, if each server machine has two GPUs, change the maximum number of instances per machine to 2. Click Save and Restart for the change to take effect.
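As a minimal sketch of what the pooling change amounts to, the snippet below edits the relevant fields of a service configuration represented as JSON. The configuration shown is a hypothetical, trimmed-down stand-in for what the ArcGIS Server Administrator API returns for this service; the field names minInstancesPerNode and maxInstancesPerNode and the endpoint path are assumptions, so confirm them against your deployment (or simply use Server Manager as described above).

```python
import json

# Hypothetical, trimmed service configuration for the RasterProcessingGPU
# system service, as it might be returned by an ArcGIS Server Administrator
# API endpoint such as admin/services/System/RasterProcessingGPU.GPServer
# (endpoint and field names are assumptions, not confirmed).
service_config = {
    "serviceName": "RasterProcessingGPU",
    "type": "GPServer",
    "minInstancesPerNode": 1,  # default minimum instances per machine
    "maxInstancesPerNode": 1,  # default maximum instances per machine
}

gpus_per_machine = 2  # number of GPU cards installed on each server node

# Match the maximum number of instances to the GPU count per machine.
service_config["maxInstancesPerNode"] = gpus_per_machine

# The edited configuration would be posted back as JSON, after which the
# service is saved and restarted for the change to take effect.
payload = json.dumps(service_config)
print(payload)
```

The point of the sketch is the invariant it encodes: maximum instances per machine should equal the number of GPU cards per machine, and should stay at 1 when only one GPU is present.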
The minimum number of instances per machine for the RasterProcessingGPU service defaults to 1. If only one GPU card is available on each server node, you may need to restart the RasterProcessingGPU service to run model inferences sequentially across different deep learning frameworks. For example, submit the first job for TensorFlow model inference; once it finishes, restart the RasterProcessingGPU service, then submit the second job for PyTorch model inference.
Each request in your deep learning raster analytics workflows includes a processorType environment parameter. Ensure that this parameter correctly specifies whether to use CPU or GPU when making requests. The processorType environment parameter is set in the tool or raster function interface in ArcGIS Pro, Map Viewer Classic, ArcGIS REST API, or ArcGIS API for Python.
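As a hedged sketch of how the processorType environment parameter might appear in a raw ArcGIS REST API request, the snippet below assembles a context JSON carrying that setting. The exact placement and spelling of the surrounding request parameters vary by tool and interface, so treat this as an illustration of the shape, not a definitive request.

```python
import json

# Environment settings for a raster analysis request. The "processorType"
# key selects CPU or GPU processing; any other keys you might add here are
# deployment-specific and not shown.
context = {
    "processorType": "GPU",  # use "CPU" on machines without a GPU card
}

# The environment travels as a JSON-encoded request parameter alongside the
# tool's own parameters (parameter name assumed to be "context" here).
request_params = {"context": json.dumps(context)}
print(request_params["context"])
```

In ArcGIS Pro, Map Viewer Classic, and ArcGIS API for Python, the same setting is exposed through the tool or raster function interface rather than hand-built JSON.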