Exercise: Python Chopper
In this exercise, you can try out an object detection model trained internally at dev-tools.ai.
In this exercise, you will:
Use a Jupyter notebook environment, powered by JupyterHub.
Open the chopper.ipynb Python notebook.
Upload a screenshot to the specialized element detection model.
View the results produced by the element detection model.
Navigate to the JupyterHub environment and make sure you arrive at the following login screen:
Enter the following credentials:
Username should be your e-mail address or your first and last name.
Password should be admin.
Click on Sign In.
After a few short moments, your unique Jupyter notebook server will be created for you.
Double-click the chopper.ipynb notebook.
Follow the steps in the notebook, executing each of the Python code blocks in the order in which they appear. To execute a code block, first click the block, then click the Play icon.
Note: After executing the first code block, a file upload widget will appear. This widget lets you upload an image file from your computer; that image is what will be sent to the deep learning model for object detection.
Note that some steps may take a few seconds to execute; we're running deep learning models behind the scenes!
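As a rough illustration of what that first code block might do, here is a minimal sketch of an upload cell. It assumes the notebook uses ipywidgets and Pillow; the actual chopper.ipynb may use a different widget or helper.

```python
# A minimal sketch of a notebook upload cell, assuming ipywidgets is available.
# The real chopper.ipynb may use a different widget or helper.
import io

import ipywidgets as widgets
from IPython.display import display
from PIL import Image

uploader = widgets.FileUpload(accept="image/*", multiple=False)
display(uploader)


def get_uploaded_image(upload_widget):
    """Return the uploaded file as a PIL image (handles ipywidgets 7.x and 8.x)."""
    if not upload_widget.value:
        raise ValueError("Upload an image first, then re-run this cell.")
    if isinstance(upload_widget.value, dict):  # ipywidgets 7.x: {filename: {...}}
        entry = next(iter(upload_widget.value.values()))
    else:  # ipywidgets 8.x: tuple of dicts
        entry = upload_widget.value[0]
    return Image.open(io.BytesIO(bytes(entry["content"])))
```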
When viewing the results, you will see a Screenshot with Bounding Box and a Chopped Element screenshot for every element that is found, along with its element type and a confidence score between 0 and 1.
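To make that output concrete, here is a minimal sketch of how a screenshot with bounding boxes and the chopped element crops could be produced from detection results. The detection format used here (a pixel box, a label, and a confidence per element) is an assumption for illustration, not the model's actual output schema.

```python
# A minimal sketch of drawing bounding boxes and "chopping" detected elements.
# The detection format is hypothetical, for illustration only.
from PIL import Image, ImageDraw


def chop_elements(screenshot_path, detections, min_confidence=0.5):
    """Draw a box around each detection and crop ("chop") it out of the screenshot."""
    screenshot = Image.open(screenshot_path).convert("RGB")
    annotated = screenshot.copy()
    draw = ImageDraw.Draw(annotated)
    chopped = []
    for det in detections:
        if det["confidence"] < min_confidence:
            continue
        left, top, right, bottom = det["box"]
        draw.rectangle((left, top, right, bottom), outline="red", width=3)
        draw.text((left, max(0, top - 12)), f'{det["label"]} {det["confidence"]:.2f}', fill="red")
        chopped.append(screenshot.crop((left, top, right, bottom)))
    return annotated, chopped


# Hypothetical usage:
# annotated, elements = chop_elements(
#     "login_screen.png",
#     [{"box": (40, 120, 360, 170), "label": "textbox", "confidence": 0.97}],
# )
```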
See a full example below:
Discussion Topic: How useful do you think this would be for testing purposes? Note that this approach allows us to locate elements on an application screen without using the Document Object Model (DOM) or, for mobile applications, a similar page source.
What are some drawbacks of relying on the DOM or on page source?
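As a starting point for the discussion: once an element has been located from a screenshot alone, a test could act on it purely by coordinates. The sketch below uses pyautogui and the hypothetical detection format from the previous sketch; it is illustrative only and not part of the exercise.

```python
# Illustrative only: acting on a detected element by its screen coordinates,
# with no DOM or page-source lookup. Assumes a desktop screen under test and
# the hypothetical detection format from the sketch above.
import pyautogui


def click_detected_element(detection):
    """Click the center of a detected element's bounding box."""
    left, top, right, bottom = detection["box"]
    pyautogui.click((left + right) // 2, (top + bottom) // 2)


click_detected_element({"box": (40, 120, 360, 170), "label": "textbox", "confidence": 0.97})
```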
Congratulations for making it this far! Perhaps even without knowing it, you just used deep learning to identify elements using only a screenshot. How cool is that?