Before starting, it is important to understand the key components we will use in this AI workshop.
Together, they provide a complete end-to-end process, from experimental development to model deployment.
In the OpenShift AI dashboard, open the Data Science Projects menu on the left:

Each participant has been assigned a unique identifier at the start of the workshop. A project with the same name has been created for you. Click on it to open it. You should arrive at a page similar to this:

We have deployed a MinIO instance to handle object storage in the cluster. You will need to add a Data Connection pointing to this storage.
Scroll to the bottom of the project page and click on Data connections:

The page will be empty for now.
Click on Add data connection and enter the following information:
- Name: pipelines
- Access key: userX (replace with your assigned username)
- Secret key: minio123
- Endpoint: https://minio-s3-minio.apps.crazy-train.sandbox1781.opentlc.com
- Region: none
- Bucket: userX (replace with your assigned username)

The result should look like this:

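Although the rest of this workshop uses the dashboard, it helps to see how these values are consumed. When a workbench is attached to a data connection, OpenShift AI typically injects the fields as AWS-style environment variables. The sketch below is illustrative, not part of the workshop steps; it assumes the boto3 library is available in your workbench image and that the usual variable names are present.

```python
# Minimal sketch: verify the data connection from a workbench notebook.
# Assumes boto3 is installed and that OpenShift AI has injected the data
# connection fields as the usual AWS-style environment variables.
import os

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["AWS_S3_ENDPOINT"],         # the MinIO route
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],  # userX in this workshop
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

bucket = os.environ["AWS_S3_BUCKET"]  # also userX in this workshop

# List the first few objects to confirm the connection works.
response = s3.list_objects_v2(Bucket=bucket, MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])
```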
We recommend creating the Pipeline Server at this point; it will host the AI pipelines you build later in the workshop:
In the top menu, open the Pipelines tab and then click on Configure pipeline server.

In the dropdown menu marked with the key icon, select the Data Connection you created earlier (named pipelines); the form is filled in automatically with the saved values. Then click the Configure pipeline server button.

Wait for the Pipeline Server to finish creating. When it is ready, your screen should look like this:
⚠️ It is essential to wait until the Pipeline Server creation is complete.
At this stage, you have opened your Data Science Project, configured a Data Connection for object storage, and deployed a Pipeline Server that is ready to run your pipelines.
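For reference, the pipelines this server will run are defined with the Kubeflow Pipelines (kfp) SDK. The sketch below is a minimal, hypothetical example (the component and pipeline names are ours, not part of the workshop); compiling it produces a YAML file that could later be imported through the Pipelines tab.

```python
# Minimal sketch of a pipeline the Pipeline Server could run, using the
# Kubeflow Pipelines (kfp) v2 SDK. Names here are illustrative only.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def say_hello(name: str) -> str:
    """A single-step component that just prints a greeting."""
    message = f"Hello, {name}!"
    print(message)
    return message


@dsl.pipeline(name="hello-pipeline")
def hello_pipeline(name: str = "world"):
    say_hello(name=name)


if __name__ == "__main__":
    # Compile to a YAML definition that can be imported via the dashboard.
    compiler.Compiler().compile(hello_pipeline, "hello-pipeline.yaml")
```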