Artificial intelligence becomes problematic in areas where it makes consequential decisions about people: whether someone gets a job, whether someone goes to prison, or which medical diagnosis is made. The problem is compounded by the fact that artificial intelligence learns from past data. If banks, for example, have disadvantaged a group of people when granting loans, that disadvantage is reflected in the training data, and the system carries it forward. Artificial intelligence is therefore fundamentally structurally conservative.

One must consequently abandon the idea that artificial intelligence decides more objectively than humans. It simply continues what it has learned from its training data and, unlike a human, merely gives the appearance of objectivity. Anyone using artificial intelligence should therefore always assume that it carries a bias learned from the training data: women may be disadvantaged in personnel selection simply because they were hired less often in the past, even though they are just as well suited for the job.
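The mechanism described above can be made concrete with a deliberately minimal sketch. The groups, records, and hiring rates below are invented for illustration; the "model" is nothing more than the hiring rate per group among equally qualified applicants, which is enough to show how a historical disparity re-emerges as a learned bias:

```python
from collections import defaultdict

# Invented historical hiring records: (group, qualified, hired).
# Both groups are equally qualified, but group "B" was hired less often.
history = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
]

def train(records):
    """Estimate P(hired | group) among equally qualified applicants."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, qualified, was_hired in records:
        if qualified:
            total[group] += 1
            hired[group] += was_hired  # True counts as 1
    return {g: hired[g] / total[g] for g in total}

model = train(history)
print(model)  # the learned scores mirror the historical disparity
```

A real system would use a far more complex model, but the effect is the same: nothing in the data tells the model that the historical disparity was unjust, so a score learned from it reproduces the disadvantage while appearing to be an objective statistic.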