Explainable AI

Artificial Intelligence

While in traditional software a programmer implements fixed rules, an AI system derives its decision rules itself: the computer uses learning algorithms to extract statistical regularities or patterns from existing data. These patterns can be represented by different models and allow accurate predictions for new data. For many AI applications, however, the learned decision rules of established AI technologies are hard to understand. Unlike a traditional program, where the decision rules are predefined and therefore transparent and comprehensible to the programmer, many established AI technologies still represent a "black box": it is often neither comprehensible nor transparent how the system arrives at its decisions or results.
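To make this contrast concrete, here is a minimal, purely illustrative sketch (assuming scikit-learn; the loan-approval setting, the handwritten_rule function and the income/debt features are invented for illustration): a hand-written rule next to a model that derives its decision rule from example data.

```python
# Illustrative sketch only: a hand-written rule versus a model that
# learns its decision rule from data. The toy data are made up.
from sklearn.tree import DecisionTreeClassifier

def handwritten_rule(income, debt):
    # Traditional software: the programmer fixes the rule explicitly.
    return "approve" if income > 3 * debt else "reject"

# Machine learning: the rule is derived from historical examples instead.
X = [[50, 10], [20, 15], [60, 5], [25, 20]]   # [income, debt] per applicant
y = ["approve", "reject", "approve", "reject"]
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(handwritten_rule(40, 10))        # rule written by a human
print(model.predict([[40, 10]])[0])    # rule learned from the data
```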

Explainable AI

The field of explainable AI is all about making the learning and decision-making process of AI comprehensible and transparent for the developers and users of AI software. In general, we distinguish between local and global explainability or interpretability. Global interpretability explains how the model makes decisions based on a holistic view of its features and learned components. Local interpretability, on the other hand, examines individual predictions to explain why the model made a particular prediction for that particular input.

Global explainability

Which features of my data are important for the actual decision output? Can I slim down my data set by discarding features that are not useful? Can the machine learning method explain which information it uses for making decisions? These are the questions addressed by global explainability. By accessing physical quantities inside the network, we can explain your data, give insights into which features are crucial for your problem, and reduce the complexity of any data set while maintaining the required accuracy.
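As a generic illustration of global explainability (a sketch using scikit-learn's permutation importance on its built-in breast-cancer dataset, not the specific network-based method described above), features can be ranked by how much shuffling each one degrades the model's accuracy:

```python
# Generic sketch of global explainability via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# features whose permutation hurts the most matter most globally.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```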

Local explainability

An AI agent is often unable to explain its decision-making process for a single data instance, which severely limits its possible applications in sensitive areas. In any context with ethical requirements, every decision taken by an AI agent must be transparent and understandable. This is the concern addressed by local explainability. For any prediction made by our algorithm, we can measure how each aspect of your data influences the outcome and explain why a certain decision has been made.
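As a simple, generic illustration of a local explanation (again a sketch assuming scikit-learn, not our production method), a single prediction can be probed by replacing each feature with a typical value and measuring how the predicted probability shifts:

```python
# Rough sketch of local explainability via single-feature perturbation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

instance = data.data[0]                      # one prediction to explain
baseline = data.data.mean(axis=0)            # "typical" feature values
p_original = model.predict_proba([instance])[0, 1]

# Replace one feature at a time by its average value and record how much
# the predicted probability moves: large shifts mark locally important features.
contributions = {}
for i, name in enumerate(data.feature_names):
    perturbed = instance.copy()
    perturbed[i] = baseline[i]
    p_perturbed = model.predict_proba([perturbed])[0, 1]
    contributions[name] = p_original - p_perturbed

for name, delta in sorted(contributions.items(),
                          key=lambda kv: abs(kv[1]), reverse=True)[:5]:
    print(f"{name}: {delta:+.3f}")
```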

Vision

Fair, transparent and efficient: Shaping an ethical future for AI technologies.

 

Mission

The project Tensor Solutions aims to make the field of Artificial Intelligence (AI) more transparent, comprehensible and efficient. With our trusted and explainable AI technology, we intend to meet the need for better verifiability of Machine Learning applications and simultaneously address the ethical concerns about these technologies. Consequently, we aim not only to increase the acceptance of AI technologies in general but also to provide the best individual and comprehensible Machine Learning solution for each of our customers.

Values
