Internship: Object Detection Regularization
BrainCreators / Amsterdam (NL)
Internship: Relational Background Knowledge and Scene Graphs for Object Detection Regularization
At BrainCreators, we're at the forefront of applied AI, with many years of successful research internship projects that combine cutting-edge science with the challenges of applying AI in the real world. The focus of this year's AI research internship projects will be on the technical challenges at the heart of our Machine Learning platform, BrainMatter.
What we expect from you
- A full-time commitment to the research internship project.
- A solid background in the theoretical subjects relevant to your particular project, and ML coding skills in PyTorch.
- Good communication and presentation skills, and a willingness to learn as much as possible in this exciting year.
- Your project will have a scientific component on which you are encouraged to work towards a publishable paper at the end of the year.
- Your project will also have an applied component, the result of which is a functional and documented piece of cutting-edge software that can be integrated into BrainMatter.
- Bachelor’s degree in Artificial Intelligence or related field.
What we can offer you
- The opportunity to work in our research team as a full-time member.
- A workplace in our Prinsengracht HQ with access to our compute cluster if required.
- Support and supervision, including a weekly personal supervision meeting and research team group meeting as well as support for integration into our software stack when needed.
- Internal weekly workshops about scientific and industrial progress.
- Membership of a vibrant team of AI realists who know how to get things done.
- Our best interns will be offered a full-time job opportunity after graduation.
The frontal pose of the human face has a vertical line of symmetry: one nose above the mouth and below the two eyes. This is an obvious fact to all of us. However, most Deep Learning approaches to facial understanding have never used a descriptive, explicit piece of information like this. Instead, they rely only on a very large set of examples, with the corresponding information encoded implicitly in the input-space distribution. Although this is the currently accepted state of affairs in Deep Learning, it might not be the best way forward, particularly when the application concerns object types for which there simply are not many examples, while explicit relational and rule-based descriptions are almost trivially available.
In this project we would like to focus on the particular approach of using explicit object descriptions as regularizers for the learning process. The central idea is that, instead of introducing bias in the form of more training data, the explicit object descriptions are exploited to act as regularizers on the learning process. One example is the work on Logic Tensor Networks (LTNs) for semantic image interpretation, which allows logical soft constraints to be integrated into the Deep Learning pipeline in a differentiable way.
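To make the idea concrete, here is a minimal sketch (not the LTN implementation itself) of how a relational rule such as "the eyes are above the nose, and the nose is above the mouth" can become a differentiable soft constraint in PyTorch. Crisp comparisons are relaxed with a sigmoid, conjunction is modeled with a product t-norm, and the resulting penalty can be added to an ordinary task loss. All function names and coordinate values below are illustrative assumptions, not part of any existing library.

```python
import torch

def soft_above(y_upper: torch.Tensor, y_lower: torch.Tensor,
               sharpness: float = 10.0) -> torch.Tensor:
    # Fuzzy truth value in (0, 1) for "y_upper is above y_lower".
    # Image coordinates: smaller y means higher up, so the claim is
    # truer as (y_lower - y_upper) grows. The sigmoid keeps it smooth
    # and differentiable, unlike a hard comparison.
    return torch.sigmoid(sharpness * (y_lower - y_upper))

def face_layout_penalty(eyes_y: torch.Tensor, nose_y: torch.Tensor,
                        mouth_y: torch.Tensor) -> torch.Tensor:
    # Conjunction via the product t-norm:
    # "eyes above nose" AND "nose above mouth".
    truth = soft_above(eyes_y, nose_y) * soft_above(nose_y, mouth_y)
    # Penalty is low when the predicted layout satisfies the rule.
    return 1.0 - truth

# Illustrative predicted y-coordinates (normalized to [0, 1]).
consistent = face_layout_penalty(torch.tensor(0.2),   # eyes near the top
                                 torch.tensor(0.5),
                                 torch.tensor(0.8))   # mouth near the bottom
inconsistent = face_layout_penalty(torch.tensor(0.8), # eyes below the mouth
                                   torch.tensor(0.5),
                                   torch.tensor(0.2))
```

In training, such a penalty would be weighted and added to the supervised loss, e.g. `loss = task_loss + lam * face_layout_penalty(...)`, so gradients push predictions toward layouts that satisfy the background knowledge even when labeled examples are scarce.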