Exploring Use Cases of Core ML Tools | by Anupam Chugh

Evaluation, transformations, updatable models, and more


Apple’s Core ML is a powerful machine learning framework with an easy-to-use drag-and-drop interface. And the latest iteration, Core ML 3, brought in lots of new layers and gave rise to updatable models.

With the release of so many features, one thing that often gets sidelined is what you can do with a model outside of Xcode. There’s a lot of functionality for fine-tuning, customization, and model testing even before you deploy a Core ML model in your applications.

Using the coremltools Python package, you can not only convert models but also use the utility classes for debugging layers, modifying feature shapes, setting hyperparameters, and even running predictions.

With the advent of coremltools 3.0, around 100 new layers have been added, in comparison to Core ML 2. Also, it’s now possible to mark layers as updatable to allow for on-device training.

In the following sections, we’ll be walking through the different use cases and scenarios where coremltools is handy for us and our ML model.

Before we get started, go ahead and install coremltools 3.0 using the following command:

pip install -U coremltools

Core ML Tools provides converters to convert models from popular machine learning libraries such as Keras, Caffe, scikit-learn, LIBSVM, and XGBoost to Core ML.

Additionally, onnx-coreml and tf-coreml neural network converters are built on top of coremltools.

tf-coreml requires setting a minimum deployment target flag in the convert function. This is because the under-the-hood implementation for iOS 13 models is different from the older versions.

For iOS 13 and above, node names need to be passed—instead of tensor shapes—in the parameter input_name_shape_dict.

The following code snippet showcases how to convert a Keras model to Core ML:
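The original embedded snippet is not reproduced here; as a minimal sketch, a conversion could look like the following (the model path, feature names, and class labels are placeholder assumptions for illustration):

```python
import coremltools

# NOTE: the model path, feature names, and class labels below are
# placeholder assumptions, not values from the original article.
coreml_model = coremltools.converters.keras.convert(
    'cat_dog.h5',                  # trained Keras model (HDF5 file)
    input_names=['image'],         # rename the model's input feature
    output_names=['output'],       # rename the model's output feature
    image_input_names='image',     # treat 'image' as an image input
    image_scale=1 / 255.0,         # multiply every pixel by this factor
    class_labels=['Cat', 'Dog'])   # map output indexes to class labels

coreml_model.save('CatDogModel.mlmodel')
```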

Using the Python script shown above, we can do a variety of things, such as changing input and output names, preprocessing, etc.

The image_input_names argument indicates that the input type can be considered as an image. Otherwise, by default, inputs are created as multi-dimensional arrays by Core ML.

image_scale is used to specify the value by which the input is scaled; each pixel gets multiplied by that number. This argument is applicable only if image_input_names was set.

red_bias, green_bias, blue_bias, and gray_bias: these values add a bias to the R, G, B, or grayscale channel of the input pixels.

For classifier models, we can pass an argument class_labels with an array or file containing the class labels, which are mapped to the neural network’s output indexes.

For cases where you have a Core ML model in hand that doesn’t meet your desired input constraints, coremltools is a handy utility. It not only allows resizing the input and output, but also lets you change their types. For example, if you need to convert an input of type MLMultiArray to an image type with a certain color space, the following piece of code does that for you:

import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft

spec = coremltools.utils.load_spec("OldModel.mlmodel")

input = spec.description.input[0]
input.type.imageType.colorSpace = ft.ImageFeatureType.RGB
input.type.imageType.height = 224
input.type.imageType.width = 224

coremltools.utils.save_spec(spec, "NewModel.mlmodel")

Using flexible_shape_utils from coremltools, we can further specify shape ranges or even set multiple input and output shapes.
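As a sketch of how that can look, the snippet below relaxes a fixed image input into a size range; the model path and the input feature name 'image' are assumptions:

```python
import coremltools
from coremltools.models.neural_network import flexible_shape_utils

# Assumed: 'OldModel.mlmodel' exists and has an image input named 'image'.
spec = coremltools.utils.load_spec('OldModel.mlmodel')

# Let the input accept any size between 64x64 and 512x512 pixels.
size_range = flexible_shape_utils.NeuralNetworkImageSizeRange()
size_range.add_height_range((64, 512))
size_range.add_width_range((64, 512))
flexible_shape_utils.update_image_size_range(
    spec, feature_name='image', size_range=size_range)

coremltools.utils.save_spec(spec, 'FlexibleModel.mlmodel')
```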

Application size matters a lot, and certain Core ML models can take up huge chunks of storage space. Quantization reduces model size without any significant loss of accuracy: when reducing the size of a model, we lower the precision of its weights.

The following code shows one such example of quantizing a Core ML model.

from coremltools.models.neural_network.quantization_utils import quantize_weights

model = coremltools.models.MLModel('model.mlmodel')
quantized_model = quantize_weights(model, nbits=8, quantization_mode="linear")

Some of the quantization modes currently supported are linear, kmeans, linear_symmetric, and linear_lut.

Currently, the Caffe and Keras converters support full-precision and half-precision quantization. This can be set via the model_precision argument of the converter functions; it defaults to float32.
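For instance, a Keras model could be converted with half-precision weights like so (the model path is a placeholder assumption):

```python
import coremltools

# 'model.h5' is a placeholder path to a trained Keras model.
coreml_fp16_model = coremltools.converters.keras.convert(
    'model.h5', model_precision='float16')
```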

Core ML Tools allows us to inspect, add, delete, or modify layers. For layers that coremltools can’t convert, it lets us insert a placeholder layer by setting the argument add_custom_layers to True in the convert function:

coreml_model = keras_converter.convert(keras_model, add_custom_layers=True)

Also, we can inspect the model’s layers by invoking inspect_layers on the NeuralNetworkBuilder instance:

spec = coremltools.utils.load_spec("MyModel.mlmodel")
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)
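With such a builder in hand, a few of its inspection helpers can be called directly; a small self-contained sketch (the model path is an assumption):

```python
import coremltools

spec = coremltools.utils.load_spec("MyModel.mlmodel")  # assumed model path
builder = coremltools.models.neural_network.NeuralNetworkBuilder(spec=spec)

# Print summaries of the model's features and the layers closest to the output.
builder.inspect_input_features()
builder.inspect_output_features()
builder.inspect_layers(last=3)  # show only the last three layers
```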

The following code shows examples of adding or removing layers from the builder specs:

nn = spec.neuralNetworkClassifier  # or spec.neuralNetwork for non-classifiers
new_layer = nn.layers.add()
new_layer.name = 'my_new_layer'
del nn.layers[-1]

In order to delete a range of layers, you can use del nn.layers[a:b].

On-device model training is one of the biggest advancements in Core ML 3. It allows us to personalize models from the device itself, without having to retrain server-side. In order to allow ML models to be updated on the device, we need to:

  • Mark certain layers as updatable
  • Set the loss functions and hyperparameters
  • Add training input specs to the builder specification

In another scenario, if you wish to build directly updatable Core ML models instead of modifying them later, you can pass the respect_trainable=True argument to coremltools.converters.keras.convert() during model conversion.

Currently, only neural networks and KNN models can be made updatable using coremltools.

builder.make_updatable(['layer_name_1', 'layer_name_2'])
model_spec.description.trainingInput[0].shortDescription = 'Image for training and updating the model'
model_spec.description.trainingInput[1].shortDescription = 'Set the class label here'

Also, we need to set hyperparameters such as the number of epochs, the learning rate, and the training batch size for updatable models, as shown below:

builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=12))
builder.set_categorical_cross_entropy_loss(name='lossLayer', input='output')
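Continuing with the same builder, the number of training epochs can be set as well (10 here is an arbitrary example value):

```python
builder.set_epochs(10)  # number of on-device training passes over the data
```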

Loss functions inside models behave just like layers. Currently, binary_entropy and categorical_cross_entropy (for multi-class classification) are among the few loss functions supported.

Finally, you need to set the isUpdatable flag on the model specification alongside the minimum specification version (Core ML 3 corresponds to specification version 4), as shown below:

model_spec.isUpdatable = True
model_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION

Note: Besides SGD, you can also use Adam optimizers.
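Sticking with the same builder, an Adam optimizer could be configured like this; the hyperparameter values are illustrative, not recommendations:

```python
from coremltools.models.neural_network import AdamParams

# Illustrative hyperparameters; tune them for your own model.
builder.set_adam_optimizer(
    AdamParams(lr=0.01, batch=32, beta1=0.9, beta2=0.999, eps=1e-8))
```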

It’s easy to run predictions directly from Python using coremltools. To start, you need to load your .mlmodel.

The following code loads an image classification .mlmodel and runs predictions on it:

import coremltools
import PIL.Image

def load_image(path, resize_to=None):
    # resize_to: (width, height)
    img = PIL.Image.open(path)
    if resize_to is not None:
        img = img.resize(resize_to, PIL.Image.ANTIALIAS)
    return img

model = coremltools.models.MLModel('catDogModel.mlmodel')
img = load_image('./test-image.jpeg', resize_to=(150, 150))
result = model.predict({'image': img})

In the above code, we pass the model filename to the MLModel constructor. Optionally, you can restrict the model to run on the CPU only by setting the boolean argument useCPUOnly=True in the constructor.

Next, we load the image using PIL (the Pillow package) and resize it to fit the model’s input constraints (150×150 for this model) before passing it to predict.

We ran the above Python script on a test image of a cat and got the following output:

{u’classLabel’: u’Cat’, u’output’: {u’Dog’: 0.0, u’Cat’: 1.0}}

Besides testing your model’s accuracy as we did above, you can also debug your model layers and print out the specs or a summary, or you can visualize your model by invoking visualize_spec() on the loaded MLModel.
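For example, assuming the same model file as above:

```python
import coremltools

model = coremltools.models.MLModel('catDogModel.mlmodel')  # assumed path
print(model.get_spec().description)  # print the input/output feature specs
model.visualize_spec()               # open the browser-based visualizer
```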

Core ML 3 brings in lots of new control flow layers that make it possible to build different neural network Core ML models programmatically using the NeuralNetworkBuilder; there’s an example of that in the official docs.

Also, the new release of coremltools brings in support for TensorFlow 2.0 converters as well. Moving forward, you can try adding activation layers to models, quantizing them, and evaluating them before you deploy them in your applications.

That wraps up this piece. I hope you enjoyed reading.
