U:CAAN Recycling Sorting
U:CAAN, who are they?
U:CAAN is a young start-up whose mission is to incentivise recycling by offering a gamified approach to the task. Founded in 2018 by Matthew Pollen and Daniel Faughnan, its mission statement is “Recycling. Gamified.”
What do they do?
Their goal is to tackle the 700,000 plastic bottles littered every day in the UK by bringing to the consumer an intuitive, engaging smart waste receptacle that automatically sorts waste ready for recycling, prompts recycling behaviour and rewards recyclers.
The Solution: Digital Advertising Recycling Pods
Utilising image recognition and machine learning, the Digital Advertising Recycling Pods (DARPs) automatically identify, sort and separate the waste inserted. When not in use, each pod acts as a traditional digital out-of-home advertising platform.
Where do I fit in?
I worked as a Computer Vision Engineering Intern for the company U:CAAN Ltd where I gained experience in image recognition algorithms, training with custom datasets, Docker, APIs and implementation of image recognition on the Raspberry Pi.
My role was to create the working technical prototypes for both products to demonstrate their viability, which I was able to successfully deliver by the end of the placement.
What I did:
I worked on two projects during the placement: the Digital Advertising Recycling Pods (DARPs) and the LittaHunt App.
For the DARPs, since there was no physical prototype yet, I started by brainstorming the user journey and the customer’s touch points with the guidance of my supervisor, producing a technical specification for the system. After defining the specification, I researched which image recognition algorithm was most suitable for this application and found YOLOv3 to be the best fit.
I trained YOLOv3 on a custom dataset, which first had to be pre-processed into the correct format, to detect the following four categories: Aluminium Cans, Glass Bottles, PET Bottles, and HDPE Milk Bottles. I then trained tiny-YOLOv3 on the same dataset, as this variant is designed to run on a small microcontroller. For both models I selected the weights at 6,400 iterations to avoid overfitting, and the results were very successful.
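The exact pre-processing depended on the source annotations, but as a rough illustration (class names and numbers are placeholders matching the four categories, not U:CAAN’s actual files), converting a pixel-coordinate bounding box into the normalised `class x_center y_center width height` label line that Darknet expects per image might look like:

```python
# Sketch: convert one pixel-space bounding box into the normalised
# "class x_center y_center width height" line format used by Darknet's
# per-image .txt label files. Class order here is illustrative.
CLASSES = ["aluminium_can", "glass_bottle", "pet_bottle", "hdpe_milk_bottle"]

def to_yolo_line(class_name, x_min, y_min, x_max, y_max, img_w, img_h):
    """Return one Darknet label line for a box given in pixel coordinates."""
    class_id = CLASSES.index(class_name)
    x_center = (x_min + x_max) / 2.0 / img_w   # normalised to [0, 1]
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: a PET bottle box on a 640x480 image.
print(to_yolo_line("pet_bottle", 100, 120, 300, 440, 640, 480))
# → 2 0.312500 0.583333 0.312500 0.666667
```

One such line is written per object into a `.txt` file that sits alongside each training image.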
Once the algorithms were trained, I connected a camera to the Raspberry Pi, optimised the board to run neural network computations with a package called NNPACK, and installed Darknet, the framework in which YOLOv3 was built and trained.
By the end of the first half of the placement, I had designed and implemented from scratch a working prototype in which the camera captures an image of the litter, the Raspberry Pi runs the trained YOLOv3 model, and the system returns the prediction.
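A simplified sketch of that capture-and-classify step is below. The file names, config/weights paths and Darknet invocation are assumptions based on the stock Darknet command line, not U:CAAN’s actual code; the parser assumes Darknet’s usual `Label: NN%` console output.

```python
import re
import subprocess

def parse_darknet_output(text):
    """Parse lines like 'PET Bottle: 87%' from Darknet's console output
    into (label, confidence) pairs, highest confidence first."""
    detections = re.findall(r"^(.+?):\s*(\d+)%", text, flags=re.MULTILINE)
    return sorted(((label, int(pct)) for label, pct in detections),
                  key=lambda d: d[1], reverse=True)

def classify(image_path):
    """Run the trained YOLOv3 on one captured frame via the Darknet CLI.
    The .data/.cfg/.weights paths are illustrative placeholders."""
    result = subprocess.run(
        ["./darknet", "detector", "test", "ucaan.data",
         "yolov3-ucaan.cfg", "yolov3-ucaan_6400.weights", image_path],
        capture_output=True, text=True)
    return parse_darknet_output(result.stdout)

if __name__ == "__main__":
    # On the Pi, the camera would first save a frame to capture.jpg.
    print(classify("capture.jpg"))
```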
My second main contribution was to the LittaHunt App. I also began this project by brainstorming the system design to establish the technical specifications.
I then researched how best to implement this system by looking further into APIs and client-server applications, and chose Flask because it is simple and intuitive and, most importantly, can scale up to complex applications.
The final prototype I produced was a client that posts an image to the server, where the inference code runs the algorithm on the received image and returns the prediction to the client.
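A minimal sketch of such a Flask endpoint is below; the route name is illustrative, and `run_inference` is a placeholder standing in for the actual Darknet/YOLOv3 call so the example stays self-contained.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_inference(image_bytes):
    """Placeholder for the real YOLOv3 inference call; returns a canned
    prediction so this sketch runs without Darknet installed."""
    return [{"label": "pet_bottle", "confidence": 0.87}]

@app.route("/predict", methods=["POST"])
def predict():
    # The client POSTs the raw image bytes as the request body; the
    # server runs inference and returns the predictions as JSON.
    image_bytes = request.get_data()
    return jsonify(predictions=run_inference(image_bytes))

# A client could then POST an image, e.g. with the `requests` library:
#   requests.post("http://server:5000/predict",
#                 data=open("litter.jpg", "rb").read())
```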
Finally, I containerised the entire application with Docker, a program that allows developers to package an application together with everything it needs, such as libraries and other dependencies, and deploy it as a single unit.
This makes the application portable, easily deployable and scalable. It will also save the company’s future software developers time and effort: the development team and the testing and deployment team can work in identical environments, avoiding the common issues of missing dependencies, conflicting installations and deprecated packages.
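As a rough illustration only (the base image, file names and port are assumptions, not the actual build), a Dockerfile packaging a Flask-based inference service might look like:

```dockerfile
# Illustrative Dockerfile: bundle the inference service with its Python
# dependencies so it runs identically on any machine with Docker.
FROM python:3.8-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "server.py"]
```

Building the image once (`docker build -t ucaan-inference .`) then gives every team member the same environment to develop, test and deploy against.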
Alongside the above, my research into Amazon Web Services (AWS) cloud solutions gave the company a thorough insight into which solution is the most cost-effective for U:CAAN’s applications while taking its future scalability into account.
I set up a virtual machine instance on AWS Elastic Cloud Compute (EC2) with all the necessary requirements to run the trained YOLOv3, including CUDA, cuDNN, OpenCV, Darknet and of course the image recognition algorithm’s trained weights. This will be crucial to U:CAAN as it begins deploying its products and its need for processing power and computing resources rapidly increases.
My contributions are mainly technical deliverables of great value to the company, which it can present in grant applications and to investors and potential market collaborators as proof of concept that its products are indeed viable and have been successfully prototyped and tested.
I made sure to adopt best practices in the field of computing that will save the company time, effort and, consequently, costs further down the line, by planning for the long term of the company’s growth and ensuring that my prototypes are scalable, portable and deployable.