In this section, you will deploy the model you just created to the OpenShift AI Model Server.
Note: If something went wrong during model training in the previous section, you can still follow this section starting with the first part titled FALLBACK.
In the OpenShift AI dashboard, open the left menu and click on Data Science Projects.
Click on the project corresponding to your username.
Select the Data connections tab.
Click Add data connection and enter the following information:
- Name: Model Registry
- Access key: userX (REPLACE WITH YOUR USER ID)
- Secret key: minio123
- Endpoint: https://minio-s3-minio.apps.crazy-train.sandbox1781.opentlc.com
- Region: none
- Bucket: model-registry

In the OpenShift AI dashboard, open the left menu and click on Data Science Projects.
Click on the project corresponding to your username.
Select the Models tab.

Click on Add model server.

Enter the following information:
- Model server name: Traffic Sign Detection
- Number of model server replicas: 1

The result should look like this:

Under Models and model servers, to the right of the model server you just created, click Deploy model.

Enter the following information:
- Model name: new
- Model server: Traffic Sign Detection, which should already be automatically selected
- Path: models/model.onnx (or default/model.onnx if you followed the FALLBACK section)

The result should look like this:

Click Deploy to start the model deployment.
Wait a few moments. If the model is successfully deployed, its status will turn green after a few seconds.

Now we will verify that the model is working correctly by querying it!
Once the model is served, we can use it as an endpoint that can receive requests. We send a REST (or gRPC) request to the model and receive a response. This endpoint can be used by applications or other services.
First, obtain the endpoint URL. To do this, click on the Internal endpoint details link in the Inference endpoint column.
In the popup that appears, you will see several URLs associated with your model server.

Copy the restUrl, which should look like http://modelmesh-serving.{userX}:8008. We will now use this URL to query the model.
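To get a feel for what such a query looks like before opening the notebook, here is a minimal sketch of building a request against that endpoint. ModelMesh exposes the KServe v2 REST inference protocol; the model name (new) matches the deployment above, but the input tensor name and shape are model-specific assumptions, so check them against the inference notebook.

```python
import json

# Assumptions for this sketch -- replace with the restUrl you copied
# and the name you gave your deployed model.
rest_url = "http://modelmesh-serving.userX:8008"
model_name = "new"

def infer_url(rest_url: str, model_name: str) -> str:
    """Build the KServe v2 REST inference URL for a ModelMesh deployment."""
    return f"{rest_url}/v2/models/{model_name}/infer"

def build_payload(pixels: list, shape: list) -> str:
    """Wrap a flattened image tensor in a KServe v2 inference request body."""
    return json.dumps({
        "inputs": [{
            "name": "images",   # input tensor name -- model-specific assumption
            "shape": shape,     # e.g. [1, 3, 640, 640] for a YOLO-style model
            "datatype": "FP32",
            "data": pixels,     # flattened, preprocessed pixel values
        }]
    })

# Sending the request (only works from inside the cluster) would look like:
#   import requests
#   response = requests.post(infer_url(rest_url, model_name),
#                            data=build_payload(pixels, [1, 3, 640, 640]))
#   detections = response.json()["outputs"]
```

The inference notebook performs the same kind of call for you, including the image preprocessing and the parsing of the detections in the response.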
Return to your Workbench (the Jupyter environment) via the Workbenches tab.
Open the Notebook inference/inference.ipynb.
Update the RestUrl variable with the URL you just copied to your clipboard.

Run all cells in the notebook using the double-arrow ▶▶ icon, and take a moment to observe the code execution.
The Base model detection section queries the base model, deployed globally for all participants.
The New model detection section uses the RestUrl endpoint to query the model you trained and deployed.
You should notice that with the base model, only standard traffic signs are detected.
After retraining, your model can now better recognize LEGO traffic signs. Congratulations!