Machine Learning: Guiding Principles

Introduction

Mercanto’s responsibility as a Machine Learning (ML) platform provider is to use all the tools at our disposal to help our clients improve the quality and effectiveness of their marketing. At the same time, we have collated a set of principles that will serve as the code of ethics we abide by when developing our ML.

These principles have been developed because it is critical to understand the ethical considerations of our work. Focussing exclusively on enhancing the machine learning capabilities of a software platform does not adequately account for human needs. An ethical, human-centric system must be designed and developed in a way that aligns with the ethics of the community it reflects: common standards of right and wrong that prescribe how people ought to behave, usually in terms of virtues, rights, social norms, or obligations to society.

The five principles outlined below are intended to ensure that ML developed by Mercanto is socially beneficial, accountable to people, built and tested for safety, and free from algorithmic bias.

These principles are not theoretical concepts; they are real standards that will actively govern our product development and will impact our business decisions.

Guiding Principles for Machine Learning

Designing trustworthy ML requires creating solutions that reflect ethical principles deeply rooted in important and timeless values. With these principles, we want to improve as a company while making the world a better place for everyone. To do so, we are committed to delivering an ML platform that will be:

  • Human-centric

    Human values are always the primary consideration. ML systems should remain under human control and be driven by values-based considerations. The development and use of ML should not be seen as an end in itself but should generate tangible benefits for marketers and consumers alike. Mercanto is conscious that the implementation of ML in our platform should in no way target vulnerable populations or lead to a negative impact on human rights.

  • Fair & unbiased

    Bias is a prejudice for or against something or somebody that may result in unfair decisions. Since humans design ML systems, it is possible for them to inject their own bias, even unintentionally. We will work to avoid discriminatory or biased results. We guarantee that we will not design any discriminatory elements into our platform, particularly those related to sensitive characteristics such as race, colour, ethnicity, national origin, religion, disability, sex, gender expression, gender identity, or sexual orientation.

  • Safe & secure

    ML systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible. We will continue to embed security and safety technologies into our platform and will develop our systems following ML safety best practices.

  • Privacy-focused

    ML systems should respect privacy. We will incorporate the privacy principles outlined in our privacy policy into the development and use of our ML technologies. When processing personal or aggregated data, we will at all times comply with the principles of lawfulness, fairness and transparency; data minimisation; accuracy; storage limitation; and integrity and confidentiality.

  • Transparent & explainable

    We will be explicit about the kind of personal and/or non-personal data Mercanto uses, as well as the purposes that data is used for. ML systems should have algorithmic accountability: when the Mercanto platform makes a decision, we will maintain the technical and organisational measures required to demonstrate the logic behind that decision, as sketched in the example following this list.
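
To make the commitment to algorithmic accountability more concrete, the minimal sketch below shows one way the weighted contribution of each input feature to a simple linear score could be recorded alongside the decision itself, so the logic behind an individual decision can be reproduced and explained on request. It is an illustration only: the feature names, weights, and scoring model are assumptions and do not describe the Mercanto platform.

    from dataclasses import dataclass, field

    @dataclass
    class DecisionRecord:
        # A score plus the per-feature contributions that produced it.
        score: float
        contributions: dict = field(default_factory=dict)

    def score_with_explanation(features, weights, bias=0.0):
        # Compute a linear score and retain each feature's weighted contribution
        # so the decision can be explained after the fact.
        contributions = {name: weights.get(name, 0.0) * value
                         for name, value in features.items()}
        return DecisionRecord(score=bias + sum(contributions.values()),
                              contributions=contributions)

    if __name__ == "__main__":
        # Hypothetical engagement features for a single recommendation decision.
        features = {"recent_opens": 3, "days_since_last_purchase": 14, "clicked_category": 1}
        weights = {"recent_opens": 0.4, "days_since_last_purchase": -0.05, "clicked_category": 0.8}
        record = score_with_explanation(features, weights, bias=0.1)
        print(f"score = {record.score:.2f}")
        for name, contribution in sorted(record.contributions.items(), key=lambda kv: -abs(kv[1])):
            print(f"  {name}: {contribution:+.2f}")

Production models are more complex than this linear example, but the design choice is the same: retain enough per-decision information to show, after the fact, why a particular decision was made.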

Looking to the future

These principles provide an intentional framework for building and using Mercanto's ML systems. While this is how we are choosing to approach ML development today, we recognise that the field continues to evolve, and we will work with internal and external stakeholders to review and refine these ethical focus areas as our ML capabilities grow over time.