# Exercise: Python Chopper

## Updated Instructions (2024)

### Open Notebook on Google Colab

Navigate to <https://github.com/dionny/ai-tutorial-notebooks/blob/main/chopper.ipynb> and click on "Open in Colab" at the top.

<div align="left"><figure><img src="/files/UjKkNHqrPj6mGUf6u2SV" alt=""><figcaption></figcaption></figure></div>

### Execute Element Detection Model

Follow the steps on the notebook, executing each of the Python code blocks in the order in which they appear. To execute a code block, click the Play icon to the left of the block.

<div align="left"><figure><img src="/files/RytOUOjqepwLmJugKnNo" alt=""><figcaption></figcaption></figure></div>

**Note:** After executing the first code block, a file upload widget will appear, allowing you to upload an image file from your computer. The uploaded image is what will be sent to the deep learning model for object detection.
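If you are curious what the widget hands back to the notebook, the sketch below shows the general shape. `google.colab.files.upload()` is the real Colab API; the import guard and the `image_bytes` variable name are illustrative, not taken from the notebook.

```python
# Hedged sketch: how a Colab upload widget's result might be consumed.
# `files.upload()` opens the widget and returns a {filename: bytes} dict.
try:
    from google.colab import files  # available only inside Google Colab
    uploaded = files.upload()
    image_bytes = next(iter(uploaded.values()))  # raw bytes of the first file
except ImportError:
    image_bytes = b""  # running outside Colab; nothing was uploaded
```

These raw bytes are what a notebook would typically decode into an image before passing it to the model.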

<div align="left"><img src="/files/-Ml8ASgPJ9rlFuRim9MF" alt="Widget For Uploading a File"></div>

{% hint style="info" %}
After uploading a file, simply follow the instructions in the notebook, executing each of the code blocks in the order in which they appear.
{% endhint %}

{% hint style="danger" %}
Note that some steps may take a few seconds to execute.
{% endhint %}

### Viewing the Results

The results show every detected element, each with a screenshot overlaid with its bounding box, the element type, and a confidence score between 0 and 1.
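Conceptually, each detection pairs an element type and a bounding box with a confidence score. The sketch below is a hypothetical illustration of that structure (the `Detection` class and the sample values are not from the notebook); it shows how one might filter out low-confidence detections.

```python
# Illustrative sketch of the detection results described above.
from dataclasses import dataclass

@dataclass
class Detection:
    element_type: str                      # e.g. "button", "text_field"
    box: tuple                             # (left, top, right, bottom) in pixels
    confidence: float                      # model confidence between 0 and 1

detections = [
    Detection("button", (10, 20, 110, 60), 0.97),
    Detection("text_field", (10, 80, 300, 120), 0.42),
]

# Keep only the detections the model is reasonably confident about.
confident = [d for d in detections if d.confidence >= 0.5]
```

A threshold like `0.5` is a common starting point; in practice you would tune it for your application.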

See a full example below:

<figure><img src="/files/1wBlsY7B0cO6Qw9WrbbY" alt=""><figcaption></figcaption></figure>

{% hint style="success" %}
**Discussion Topic:** How useful do you think this would be for testing purposes? Note that this approach allows us to locate elements on an application screen without using a Document Object Model (DOM) or similar page source for mobile applications.
{% endhint %}

{% hint style="warning" %}
What are some drawbacks of relying on the DOM or on page source?
{% endhint %}

### Congratulations!

You've made it to the end! Perhaps without even knowing it, you just used deep learning to identify elements using only a screenshot. How cool is that?

## Legacy Instructions (Deprecated)

In this exercise, you will:

* [Create your very own Jupyter server](#set-up-your-jupyter-server), powered by JupyterHub
* [Launch](#launch-chopper-notebook) the `chopper.ipynb` Python notebook
* [Execute](#execute-element-detection-model) the specialized element detection model
* [View the results](#viewing-the-results) produced by the element detection model

### Set Up Your Jupyter Server

Navigate to <https://jupyterhub.dionny.dev> and make sure you arrive at the following login screen:

![Jupyter Server Login Screen](/files/-Ml8ASgLmAhr2whw_mxy)

Enter the following credentials:

* `Username` should be your e-mail address or your first and last name.
* `Password` should be `admin`.

{% hint style="info" %}
Your e-mail address is not collected; it is only used to ensure your username is unique among all tutorial attendees.
{% endhint %}

Click on `Sign In`.

After a few short moments, your unique Jupyter notebook server will be created for you.

![](/files/-Ml8ASgMypzOKkDZr7nO)

### Launch Chopper Notebook

Double click the `chopper.ipynb` notebook.

![Your Own Unique Jupyter Server](/files/9pWmWjCvurnSj3MJ7jip)

### Execute Element Detection Model

Follow the steps on the notebook, executing each of the Python code blocks in the order in which they appear. To execute a code block, first, click the block, then click the Play icon.

![Executing a Code Block on Jupyter](/files/-Ml8ASgOMLVzT0OVpZ35)

**Note:** After executing the first code block, a file upload widget will appear, allowing you to upload an image file from your computer. The uploaded image is what will be sent to the deep learning model for object detection.

![Widget For Uploading a File](/files/-Ml8ASgPJ9rlFuRim9MF)

{% hint style="info" %}
After uploading a file, simply follow the instructions in the Jupyter notebook, executing each of the code blocks in the order in which they appear.
{% endhint %}

{% hint style="danger" %}
Note that some steps may take a few seconds to execute! We're running deep learning models behind the scenes!
{% endhint %}

### Viewing the Results

When viewing the results, you will see a `Screenshot with Bounding Box` and a `Chopped Element` screenshot for every element that is found.
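The "chopping" step amounts to cropping the detected element out of the full screenshot using its bounding box. The sketch below shows the idea using Pillow; the function name and the stand-in blank screenshot are illustrative, not the notebook's actual code.

```python
# Hypothetical sketch of "chopping" an element from a screenshot.
from PIL import Image

def chop_element(screenshot: Image.Image, box: tuple) -> Image.Image:
    """Crop the region given by box = (left, top, right, bottom)."""
    return screenshot.crop(box)

screenshot = Image.new("RGB", (400, 300), color="white")  # stand-in screenshot
element = chop_element(screenshot, (50, 40, 150, 90))
print(element.size)  # (100, 50)
```

In the real notebook, the bounding boxes come from the element detection model rather than being hard-coded.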

See a full example below:

![](/files/-Ml8ASgQSUMbZxIBwY9X)

{% hint style="success" %}
**Discussion Topic:** How useful do you think this would be for testing purposes? Note that this approach allows us to locate elements on an application screen without using a Document Object Model (DOM) or similar page source for mobile applications.
{% endhint %}

{% hint style="warning" %}
What are some drawbacks of relying on the DOM or on page source?
{% endhint %}

### Congratulations!

You've made it to the end! Perhaps without even knowing it, you just used deep learning to identify elements using only a screenshot. How cool is that?


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://ai-tutorial.dionny.dev/exercises/exercise-3-python-chopper.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
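For example, the query URL can be built by URL-encoding the question into the `ask` parameter. Only the page URL above is taken from this documentation; the sample question is hypothetical.

```python
# Sketch: constructing the documentation query URL described above.
from urllib.parse import urlencode

base = "https://ai-tutorial.dionny.dev/exercises/exercise-3-python-chopper.md"
question = "What model architecture does the chopper notebook use?"
url = f"{base}?{urlencode({'ask': question})}"  # encodes spaces, punctuation, etc.
print(url)
```

An HTTP client (e.g. `requests.get(url)` or `curl`) would then fetch the answer from that URL.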
